Sensors (Basel, Switzerland). 2022 Mar 10;22(6):2142. doi: 10.3390/s22062142

Accurate Channel Estimation and Adaptive Underwater Acoustic Communications Based on Gaussian Likelihood and Constellation Aggregation

Liang Wang 1, Peiyue Qiao 2,*, Junyan Liang 2, Tong Chen 3, Xinjie Wang 2, Guang Yang 2,4
Editor: Juan V Capella
PMCID: PMC8955422  PMID: 35336313

Abstract

Achieving accurate channel estimation and adaptive communications with moving transceivers is challenging due to rapid changes in the underwater acoustic channels. We achieve an accurate channel estimation of fast time-varying underwater acoustic channels by using the superimposed training scheme with a powerful channel estimation algorithm and turbo equalization, where the training sequence and the symbol sequence are linearly superimposed. To realize this, we develop a ‘global’ channel estimation algorithm based on Gaussian likelihood, where the channel correlation between (among) the segments is fully exploited by using the product of the Gaussian probability-density functions of the segments, thereby realizing an ideal channel estimation of each segment. Moreover, the Gaussian-likelihood-based channel estimation is embedded in turbo equalization, where the information exchange between the equalizer and the decoder is carried out in an iterative manner to achieve an accurate channel estimation of each segment. In addition, an adaptive communication algorithm based on constellation aggregation is proposed to resist the severe fast time-varying multipath interference and environmental noise, where the encoding rate is automatically determined for reliable underwater acoustic communications according to the constellation aggregation degree of equalization results. Field experiments with moving transceivers (the communication distance was approximately 5.5 km) were carried out in the Yellow Sea in 2021, and the experimental results verify the effectiveness of the two proposed algorithms.

Keywords: time-varying underwater acoustic channels, superimposed training, Gaussian likelihood, constellation aggregation, turbo equalization

1. Introduction

Underwater acoustic communication technology can be widely applied in many fields, such as marine pollution monitoring, underwater rescue, and autonomous underwater vehicle (AUV) positioning and navigation. However, underwater acoustic channels are characterized by time-varying multipath propagation. In particular, when there is relative motion between the transceivers, the channel changes rapidly, resulting in fast time-varying multipath interference, which distorts the received signal waveform and degrades or even ruins the decoding performance of the underwater acoustic communication system [1,2,3].

To solve the issues of time-varying underwater acoustic channels and environmental noise, adaptive communication schemes were proposed, where the transmitter automatically selects an appropriate modulation according to the instantaneous channel state information (CSI) and signal-to-noise ratio (SNR). Adaptive communication schemes fall mainly into two categories: feedback adaptive communications and direct adaptive communications, as shown in Figure 1a,b, respectively. For feedback adaptive communications, as shown in Figure 1a, User A sends a test signal to User B. User B estimates the CSI and SNR based on the test signal, and then feeds them back to User A. User A selects a modulation according to the fed-back CSI and SNR, and then transmits the data information to User B by using the selected modulation [4,5]. For direct adaptive communications in Figure 1b, User A initially selects a modulation, such as the direct sequence spread spectrum (DSSS), and then transmits the data information to User B by using DSSS. User B identifies the modulation (i.e., identifies DSSS), demodulates and decodes, and estimates the CSI and SNR, such as a simple channel and SNR = 20 dB. According to the estimated CSI and SNR, User B selects a new modulation, such as orthogonal frequency division multiplexing (OFDM), and then feeds data information back to User A by using OFDM. Similarly, User A identifies, demodulates and decodes, and estimates the CSI and SNR. Then, according to the estimated CSI and SNR, User A selects the original or a new modulation, and then transmits the data information to User B by using the selected modulation [6,7]. The biggest difference between the two adaptive communication schemes is that the second scheme does not need to send a test signal. Therefore, for a specified amount of data information, the second scheme saves communication time, thereby reducing or even avoiding time variation of the channel during communications. Therefore, the second scheme is more suitable than the first for the fast time-varying channels incurred by underwater acoustic communications with moving transceivers.

Figure 1. (a) Feedback adaptive communications; (b) direct adaptive communications.

For adaptive communications, four kinds of modulation and demodulation are mainly considered: multiple frequency shift keying (MFSK), spread spectrum, orthogonal frequency division multiplexing (OFDM) and single carrier. The transmission rate of MFSK is low; spread spectrum technology usually uses high-order spreading codes and therefore has low communication efficiency [8]; and OFDM has poor anti-frequency-offset performance [9,10,11,12,13,14,15,16]. Therefore, with its high transmission rate and good anti-frequency-offset characteristics [17,18,19], single-carrier technology is adopted in this paper. It can be combined with a variety of encoding rates to realize adaptive underwater acoustic communications with moving transceivers.

Channel estimation is one of the key factors in realizing reliable adaptive communications. At present, there are mainly three kinds of underwater acoustic channel estimation algorithms: channel estimation algorithms based on a reference signal, blind estimation algorithms and semi-blind estimation algorithms [20,21,22,23,24,25,26,27,28]. Among the three kinds, the channel estimation and channel tracking capabilities of the reference-signal-based algorithms are the strongest. Much research on them has been conducted by teams such as those at the University of Connecticut, the Massachusetts Institute of Technology, the Institute of Acoustics of the Chinese Academy of Sciences, and Harbin Engineering University. So far, all of the above channel estimation algorithms based on reference signals have adopted the traditional time-multiplexed training sequence scheme. In order to further improve the tracking capability for time-varying channels, the joint team of the Qingdao University of Technology, the University of Wollongong and the University of Western Australia [29] proposed a superimposed training scheme for underwater acoustic communications, where the training sequence and the symbol sequence are linearly superimposed so that the channel information of the training sequence and the symbol sequence is completely consistent.

As in [29,30], the superimposed training (ST) scheme and the segment strategy are used in this paper to enhance the estimation and tracking capability for fast time-varying channels. To realize the full potential of the ST scheme and the segment strategy, a channel estimation algorithm based on Gaussian likelihood (GL) is proposed. The product of the Gaussian probability-density functions of the segments is still a Gaussian probability-density function, which can be parameterized by a mean and a variance, where the mean is the channel estimate and the variance is the deviation of the channel estimate. The variance of the Gaussian probability-density function after the product is less than the variance of the Gaussian probability-density function of each segment, which means that the channel estimate of a segment after the product is more accurate than the estimate obtained from that segment alone. This is equivalent to estimating the channel information of the segment by using the 'whole' data block [29,30], thereby leading to an ideal channel estimation of the segment.

It is important to note that the proposed GL algorithm can achieve the same channel estimation and tracking performance as [29,30] in a 'novel' Gaussian product way, because it can be seen as a message-passing method in the Gaussian scenario [29,30]. The message-passing idea was first proposed in [30] to improve the channel estimation capability; it was then applied in underwater acoustic communications over a distance of approximately 1 km [29]. Different from [29,30], in this paper, the same idea as message passing [29,30] is realized in a 'novel' Gaussian product way; in particular, the proposed algorithm is applied in actual underwater acoustic communication machines, and the effective communication distance is extended from 1 km to 5.5 km.

In addition, an adaptive communication algorithm based on constellation aggregation (CA) is proposed. The encoding rate (such as rate-1/2, rate-1/4, rate-1/8, or rate-1/16) is automatically selected based on the aggregation degree of the constellation points after linear minimum mean square error (LMMSE) equalization. The working principles of the proposed direct adaptive communications and the traditional direct adaptive communications are different. The traditional direct adaptive communications select the modulation based on CSI and SNR. However, the proposed direct adaptive communications select the encoding rate based on the constellation aggregation. The proposed algorithm based on the constellation aggregation is more accurate in making a selection than the traditional algorithm based on CSI and SNR. In order to fully realize the potential of the GL algorithm and the CA algorithm, the single-carrier communication system and turbo equalization are adopted. The channel estimator (GL), constellation aggregation decision maker (CA), equalizer and decoder are combined together, and they are performed jointly in an iterative manner (turbo equalization) to realize an accurate estimation of fast time-varying channels and reliable communications by using the information exchange between the equalizer and the decoder (turbo equalization). Field experiments with moving transceivers (the communication distance was approximately 5.5 km) were carried out in the Yellow Sea in 2021 to verify the effectiveness of the proposed algorithms. The major contributions of this paper are summarized as follows:

  • (1) A channel estimation algorithm, named GL, is proposed, realizing the same performance as the bidirectional channel estimation algorithm [29,30] in a novel product way of probability-density functions;

  • (2) An adaptive communication algorithm based on constellation aggregation is proposed to improve the applicability of the system for different environments;

  • (3) GL-based channel estimation, LMMSE equalization and decoding are iteratively performed (turbo equalization), leading to a significant performance improvement of the whole system;

  • (4) The proposed algorithms are applied in actual underwater acoustic communication machines to verify their effectiveness.

The remainder of the paper is organized as follows. The system structure is provided in Section 2. Then, a channel estimation algorithm based on Gaussian likelihood and an adaptive communication algorithm based on constellation aggregation are presented in Section 3. Simulations, experiments and the conclusion are presented in Section 4, Section 5 and Section 6, respectively. Throughout the paper, the superscripts $(\cdot)^{Tr}$ and $(\cdot)^{H}$ represent transpose and conjugate transpose, respectively.

2. System Structure

The system structure of underwater acoustic communications is shown in Figure 2. At the transmitter, the information bit sequence is encoded, interleaved and mapped to symbols by using quadrature phase shift keying (QPSK). The training sequence and the symbol sequence are linearly superimposed, the resultant sequence is partitioned into multiple segments and then a cyclic prefix (CP) is appended to each segment to avoid inter-segment interference and to facilitate low-complexity equalization. In-phase/quadrature (IQ) modulation is applied to each CP-plus-segment. Hyperbolic frequency modulation (HFM) signals with negative and positive modulation rates are used as the head and the tail of the signal frame, respectively. Then, the resultant signals are transmitted by a transducer.
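To make the transmitter chain concrete, the following is a minimal numpy sketch of the baseband operations just described (QPSK mapping, superposition of the periodic training, segmentation, CP insertion and IQ modulation). The encoder and interleaver are replaced by random coded bits, the pulse shaping is a simple rectangular hold, the HFM head and tail are omitted, and the training period T = 64 is an assumption made only for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: the S256 case of the paper; the training period T = 64 is an assumption.
Lf, Ls, Lcp, T = 1024, 256, 128, 64
r = 0.25                                   # training-to-symbol power ratio (0.25:1)
fc, fs, B = 12e3, 96e3, 4e3                # carrier, sampling rate, bandwidth

# Coded, interleaved bits (encoder/interleaver omitted for brevity) mapped to unit-power QPSK.
bits = rng.integers(0, 2, 2 * Lf)
f = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

# Periodic training sequence t_k = exp(j*pi/T*(k-1)^2), repeated to the block length.
t = np.tile(np.exp(1j * np.pi / T * np.arange(T) ** 2), Lf // T)

# Superimposed training, segmentation into Ny = Lf/Ls segments and CP insertion.
s = np.sqrt(r) * t + f
segments = s.reshape(-1, Ls)
with_cp = np.hstack([segments[:, -Lcp:], segments]).reshape(-1)

# IQ modulation onto the 12 kHz carrier at 96 kHz sampling (rectangular pulse, fs/B samples per symbol).
up = np.repeat(with_cp, int(fs / B))
n = np.arange(up.size)
passband = up.real * np.cos(2 * np.pi * fc * n / fs) - up.imag * np.sin(2 * np.pi * fc * n / fs)
```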

Figure 2. System structure. (a) Transmitter; (b) receiver.

The HFM signals are used to estimate and eliminate the average frequency offset and to synchronize the received signals [31]. Then, the transmitted signals are extracted, band-pass filtering and IQ demodulation are carried out and the CPs are removed. With the resultant signals, we estimate the initial channels $\hat{\mathbf{h}}_n^F$ of all segments based on the GL algorithm, estimate the noise powers $\hat{p}_n$ and obtain a 'clean' signal $\mathbf{z}_n$ after training elimination for data equalization. Then, LMMSE equalization, the CA decision and decoding are carried out based on $\hat{\mathbf{h}}_n^F$, $\hat{p}_n$ and $\mathbf{z}_n$, as shown in Figure 3. The quantities shown on both sides of the equalizer in Figure 3 represent the same things; those on the right are the initial values, and those on the left are the iteratively updated values. The iterative process proceeds until a pre-set number of iterations is reached, and hard decisions are made on each information bit in the last iteration.

Figure 3. Turbo equalization.

An illustration of the CA decision is shown in Figure 4. The return encoding rate is determined by comparing the aggregation degree with the pre-set values $\xi_{in}$ and $\xi_{ex}$. When the constellation aggregation degree $\xi_m < \xi_{in}$, the return encoding rate is increased. When $\xi_m > \xi_{ex}$, the return encoding rate is reduced. When $\xi_{in} \leq \xi_m \leq \xi_{ex}$, the return encoding rate is kept the same.

Figure 4. Constellation aggregation decision.

The turbo equalization is shown in Figure 3. Based on $\hat{\mathbf{h}}_n^F$, $\hat{p}_n$ and $\mathbf{z}_n$, the LMMSE equalization and decoding for each segment are carried out, where the LMMSE equalization can be efficiently implemented with the fast Fourier transform (FFT) and the initial a priori logarithm likelihood ratios (LLRs) of the interleaved encoded bits are set to zeros, i.e., $L_a = 0$. The soft detection outputs of the multiple segments are collected to make up the extrinsic LLRs $L_e$, and then deinterleaving and decoding are carried out. The output of the decoder is used by both the equalizer and the channel estimator, so there are two branches from the decoder. Both branches use the latest decoding results, i.e., the LLRs of the encoded bits from the decoder, and they are updated in each iteration. In the first branch, the LLRs of the encoded bits are interleaved and input to the equalizer. In the second branch, hard decisions on the encoded bits are made, followed by interleaving and QPSK mapping to obtain the (estimated) symbol sequence; these symbols, together with the training sequence, are used for accurate channel (re)estimation. After that, based on $L_a$ from the first branch and $\hat{\mathbf{h}}_n^F$, $\hat{p}_n$ and $\mathbf{z}_n$ from the second branch, LMMSE equalization is performed to obtain $L_e$, which is input into the decoder for the next round of iteration (turbo equalization).
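As a small illustration of the two decoder branches described above, the sketch below converts a vector of decoder LLRs on the coded bits into (i) interleaved a priori LLRs for the equalizer and (ii) hard decisions that are interleaved and mapped to QPSK symbols for channel re-estimation. The random interleaver, the stand-in LLR values and the sign convention (positive LLR means bit 0) are assumptions made only for this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
Lf = 1024
perm = rng.permutation(2 * Lf)              # illustrative random interleaver (assumption)
llr_coded = rng.normal(0, 4, 2 * Lf)        # stand-in for the decoder's LLRs of the encoded bits

# Branch 1: interleave the LLRs to form the a priori input L_a of the LMMSE equalizer.
La = llr_coded[perm]

# Branch 2: hard decisions on the encoded bits, interleaving, QPSK mapping; the resulting
# symbol estimates are used together with the training sequence for channel (re)estimation.
hard = (llr_coded < 0).astype(int)          # convention assumed here: positive LLR means bit 0
hard_i = hard[perm]
sym_est = ((1 - 2 * hard_i[0::2]) + 1j * (1 - 2 * hard_i[1::2])) / np.sqrt(2)
```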

3. Accurate Channel Estimation and Adaptive Communications

A block of an information bit sequence denoted by $\mathbf{b} = [b_1, \ldots, b_{L_b}]^{Tr}$ is encoded and interleaved, yielding an interleaved coded bit sequence denoted by $\mathbf{c} = [\mathbf{c}_1, \ldots, \mathbf{c}_{L_i}]^{Tr}$, where $\mathbf{c}_i = [c_i^1, c_i^2]^{Tr}$. Then, $\mathbf{c}$ is mapped into a symbol sequence denoted by $\mathbf{f}_{L_f} = [f_1, \ldots, f_{L_f}]^{Tr}$, where each $f_i$ corresponds to a $\mathbf{c}_i$. Denote the periodic training sequence as $\mathbf{t}_{L_f} = [t_1, \ldots, t_{L_f}]^{Tr}$ with a period of $T$, where $t_k = e^{j\frac{\pi}{T}(k-1)^2}, k = 1, \ldots, T$ [32]. The training sequence and the symbol sequence are linearly superimposed with a power ratio $r$, yielding the transmitted signal $\mathbf{s}$ with length $L_f$, where $L_f$ is an integer multiple of $T$.

Divide $\mathbf{s}$ into $N_y$ segments, i.e., $\mathbf{s} = [\mathbf{s}_1, \ldots, \mathbf{s}_{N_y}]^{Tr}$, and the length of each segment is $L_s$, where $L_f = N_y \times L_s$. Taking $\mathbf{s}_n$ as an example, the corresponding symbol sequence is $\mathbf{f}_n$, and the corresponding training sequence is $\mathbf{t}_{L_s}$. A CP is added to each segment, yielding the channel circulant matrix denoted by $\mathbf{H}_n$. Denote the white Gaussian noise as $\mathbf{w}$.

Denote a segment of the received signal after CP removal as $\mathbf{y}_n$; its length is an integer multiple of $T$, i.e., $L_s = pT$. Then, we can represent $\mathbf{y}_n$ as $\mathbf{y}_n = [\mathbf{y}_{1T}, \ldots, \mathbf{y}_{pT}]^{Tr}$. The received signal $\mathbf{y}_n$ can be written as

$$\mathbf{y}_n = \mathbf{H}_n \mathbf{s}_n + \mathbf{w} = \mathbf{H}_n\left(\sqrt{r}\,\mathbf{t}_{L_s} + \mathbf{f}_n\right) + \mathbf{w} = \mathbf{H}_n \mathbf{f}_n + \sqrt{r}\,\mathbf{H}_n \mathbf{t}_{L_s} + \mathbf{w}. \qquad (1)$$

Define $L_c$ as the channel order, where $T \geq L_c$; then, the Toeplitz matrix formed by the training sequence can be represented as

$$\mathbf{A} = \begin{bmatrix} t_1 & t_T & \cdots & t_{T-L_c+2} \\ t_2 & t_1 & \cdots & t_{T-L_c+3} \\ \vdots & \vdots & \ddots & \vdots \\ t_T & t_{T-1} & \cdots & t_{T-L_c+1} \end{bmatrix}_{T \times L_c}. \qquad (2)$$

From Appendix A, based on the least squares (LS) algorithm, the channel estimate of a segment can be computed as

$$\hat{\mathbf{h}}_n = \left(\mathbf{A}^H \mathbf{A}\right)^{-1} \mathbf{A}^H \left(\frac{1}{p}\sum_{i=1}^{p}\mathbf{y}_{iT}\right)_{L_c \times 1}. \qquad (3)$$
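The following sketch illustrates the initial LS estimate of Equations (2) and (3) on synthetic data: a segment with superimposed periodic training passes through a synthetic multipath channel (circular convolution, i.e., the CP model), the p subsegments are averaged as in Equation (A7), and the LS problem is solved. The concrete values (T = 64, Lc = 12, the noise level and the exponential channel profile) are illustrative assumptions, and the square root of the power ratio is folded into A so that the solution estimates the channel directly.

```python
import numpy as np

rng = np.random.default_rng(2)
T, Lc, Ls, r = 64, 12, 256, 0.25    # training period, channel order, segment length, power ratio (illustrative)
p = Ls // T                         # number of length-T subsegments in one segment

# Periodic training (one period) and a unit-power QPSK data segment.
t = np.exp(1j * np.pi / T * np.arange(T) ** 2)
f = (rng.choice([-1.0, 1.0], Ls) + 1j * rng.choice([-1.0, 1.0], Ls)) / np.sqrt(2)
s = np.sqrt(r) * np.tile(t, p) + f

# Synthetic multipath channel; after CP removal the channel acts circularly on the segment.
h = (rng.normal(size=Lc) + 1j * rng.normal(size=Lc)) * np.exp(-0.3 * np.arange(Lc))
w = 0.05 * (rng.normal(size=Ls) + 1j * rng.normal(size=Ls))
y = np.fft.ifft(np.fft.fft(s) * np.fft.fft(h, Ls)) + w

# Toeplitz matrix of Equation (2); the sqrt(r) amplitude of the training is folded into A
# so that the LS solution estimates the channel taps directly.
A = np.sqrt(r) * np.array([[t[(m - l) % T] for l in range(Lc)] for m in range(T)])

# Average the p subsegments (data and noise average out, Equation (A7)),
# then solve the LS problem of Equation (3).
y_bar = y.reshape(p, T).mean(axis=0)
h_hat = np.linalg.solve(A.conj().T @ A, A.conj().T @ y_bar)
print(np.round(np.linalg.norm(h_hat - h) / np.linalg.norm(h), 3))
```

The printed relative error is not negligible because the superimposed data symbols act as noise on the LS estimate; this is the residual that the Gaussian-likelihood fusion of Section 3.1 and the turbo iterations later reduce.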

3.1. Accurate Channel Estimation Based on Gaussian Likelihood

Channel estimates of two consecutive segments can be expressed as two independent and identically distributed probability-density functions in the Gaussian scenario. Denote the two probability-density functions as $p_{n-1}(x)$ and $p_n(x)$. Denote $\mu_{\hat{h}_{n-1}}$ and $\sigma_{\hat{h}_{n-1}}^2$ as the mean value and variance of the channel estimate $\hat{h}_{n-1}$ of the $(n-1)$-th segment, respectively, and denote $\mu_{\hat{h}_n}$ and $\sigma_{\hat{h}_n}^2$ as the mean value and variance of the channel estimate $\hat{h}_n$ of the $n$-th segment, respectively. Then, we can obtain

$$p_{n-1}(x) = \frac{1}{\sqrt{2\pi}\,\sigma_{\hat{h}_{n-1}}}\, e^{-\frac{(x-\mu_{\hat{h}_{n-1}})^2}{2\sigma_{\hat{h}_{n-1}}^2}}, \qquad p_n(x) = \frac{1}{\sqrt{2\pi}\,\sigma_{\hat{h}_n}}\, e^{-\frac{(x-\mu_{\hat{h}_n})^2}{2\sigma_{\hat{h}_n}^2}}. \qquad (4)$$

Denote $\hat{h}_n^F$ as the channel estimate after information fusion of the channel estimate $\hat{h}_{n-1}$ of the $(n-1)$-th segment and the channel estimate $\hat{h}_n$ of the $n$-th segment. Denote $\mu_{\hat{h}_n^F}$ and $\sigma_{\hat{h}_n^F}^2$ as the mean value and variance of the channel estimate $\hat{h}_n^F$ after information fusion, respectively. Then, the product of the two probability-density functions can be expressed as

$$p_{n-1}(x)\,p_n(x) = \frac{1}{2\pi\,\sigma_{\hat{h}_n}\sigma_{\hat{h}_{n-1}}}\, e^{-\left(\frac{(x-\mu_{\hat{h}_n})^2}{2\sigma_{\hat{h}_n}^2} + \frac{(x-\mu_{\hat{h}_{n-1}})^2}{2\sigma_{\hat{h}_{n-1}}^2}\right)} = \underbrace{\frac{1}{\sqrt{2\pi\left(\sigma_{\hat{h}_n}^2 + \sigma_{\hat{h}_{n-1}}^2\right)}}\, e^{-\frac{\left(\mu_{\hat{h}_n} - \mu_{\hat{h}_{n-1}}\right)^2}{2\left(\sigma_{\hat{h}_n}^2 + \sigma_{\hat{h}_{n-1}}^2\right)}}}_{C_A}\; \frac{1}{\sqrt{2\pi\sigma_{\hat{h}_n^F}^2}}\, e^{-\frac{\left(x-\mu_{\hat{h}_n^F}\right)^2}{2\sigma_{\hat{h}_n^F}^2}} = C_A\,\frac{1}{\sqrt{2\pi\sigma_{\hat{h}_n^F}^2}}\, e^{-\frac{\left(x-\mu_{\hat{h}_n^F}\right)^2}{2\sigma_{\hat{h}_n^F}^2}}, \qquad (5)$$

where

$$\mu_{\hat{h}_n^F} = \frac{\mu_{\hat{h}_n}\sigma_{\hat{h}_{n-1}}^2 + \mu_{\hat{h}_{n-1}}\sigma_{\hat{h}_n}^2}{\sigma_{\hat{h}_n}^2 + \sigma_{\hat{h}_{n-1}}^2}, \qquad (6)$$

and

$$\sigma_{\hat{h}_n^F}^2 = \frac{\sigma_{\hat{h}_n}^2\,\sigma_{\hat{h}_{n-1}}^2}{\sigma_{\hat{h}_n}^2 + \sigma_{\hat{h}_{n-1}}^2}. \qquad (7)$$

It is important to note that

$$\sigma_{\hat{h}_n^F}^2 - \sigma_{\hat{h}_{n-1}}^2 = \frac{\sigma_{\hat{h}_n}^2\sigma_{\hat{h}_{n-1}}^2}{\sigma_{\hat{h}_n}^2 + \sigma_{\hat{h}_{n-1}}^2} - \sigma_{\hat{h}_{n-1}}^2 = \frac{-\sigma_{\hat{h}_{n-1}}^4}{\sigma_{\hat{h}_n}^2 + \sigma_{\hat{h}_{n-1}}^2} < 0, \qquad \sigma_{\hat{h}_n^F}^2 - \sigma_{\hat{h}_n}^2 = \frac{\sigma_{\hat{h}_n}^2\sigma_{\hat{h}_{n-1}}^2}{\sigma_{\hat{h}_n}^2 + \sigma_{\hat{h}_{n-1}}^2} - \sigma_{\hat{h}_n}^2 = \frac{-\sigma_{\hat{h}_n}^4}{\sigma_{\hat{h}_n}^2 + \sigma_{\hat{h}_{n-1}}^2} < 0, \qquad (8)$$

which means that the variance $\sigma_{\hat{h}_n^F}^2$ after the product is smaller than both $\sigma_{\hat{h}_{n-1}}^2$ and $\sigma_{\hat{h}_n}^2$, i.e., the fused channel estimate $\hat{h}_n^F$ is more accurate and closer to the real channel than $\hat{h}_{n-1}$ and $\hat{h}_n$. $C_A$ is the scale factor of the Gaussian distribution; it is not a function of $x$ and can be normalized away. Therefore, we can obtain the Gaussian distribution $p_n^F(x)$ after the product, i.e.,

$$p_{n-1}(x)\,p_n(x) = p_n^F(x) \sim \mathcal{N}\left(\mu_{\hat{h}_n^F}, \sigma_{\hat{h}_n^F}^2\right), \qquad (9)$$

where $\mathcal{N}$ represents the Gaussian distribution. From (7), we can acquire

$$\frac{1}{\sigma_{\hat{h}_n^F}^2} = \frac{\sigma_{\hat{h}_n}^2 + \sigma_{\hat{h}_{n-1}}^2}{\sigma_{\hat{h}_n}^2\,\sigma_{\hat{h}_{n-1}}^2} = \frac{1}{\sigma_{\hat{h}_{n-1}}^2} + \frac{1}{\sigma_{\hat{h}_n}^2}. \qquad (10)$$

From (6) and (7), i.e., dividing $\mu_{\hat{h}_n^F}$ by $\sigma_{\hat{h}_n^F}^2$, we can obtain

$$\frac{\mu_{\hat{h}_n^F}}{\sigma_{\hat{h}_n^F}^2} = \frac{\mu_{\hat{h}_n}\sigma_{\hat{h}_{n-1}}^2 + \mu_{\hat{h}_{n-1}}\sigma_{\hat{h}_n}^2}{\sigma_{\hat{h}_{n-1}}^2\,\sigma_{\hat{h}_n}^2} = \frac{\mu_{\hat{h}_{n-1}}}{\sigma_{\hat{h}_{n-1}}^2} + \frac{\mu_{\hat{h}_n}}{\sigma_{\hat{h}_n}^2}, \qquad (11)$$

i.e.,

$$\mu_{\hat{h}_n^F} = \sigma_{\hat{h}_n^F}^2\left(\frac{\mu_{\hat{h}_{n-1}}}{\sigma_{\hat{h}_{n-1}}^2} + \frac{\mu_{\hat{h}_n}}{\sigma_{\hat{h}_n}^2}\right). \qquad (12)$$

The message fusion Formulas (10) and (12) are equivalent to the message fusion Formulas (18) and (19) of [29], i.e., the proposed GL algorithm using the 'novel' Gaussian product can achieve the same performance as the bidirectional channel estimation algorithm in [29], and the two have the same computational complexity.
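Equations (10) and (12) amount to the standard product-of-Gaussians (precision-weighted) fusion rule; a minimal sketch with illustrative numbers:

```python
import numpy as np

def fuse_gaussian(mu_a, var_a, mu_b, var_b):
    """Product-of-Gaussians fusion of two channel estimates, Equations (10) and (12):
    the precisions add, and the fused mean is the precision-weighted mean."""
    var_f = 1.0 / (1.0 / var_a + 1.0 / var_b)        # Equation (10)
    mu_f = var_f * (mu_a / var_a + mu_b / var_b)     # Equation (12)
    return mu_f, var_f

# Two estimates of the same channel tap (illustrative numbers): the fused variance
# is smaller than either input variance, as stated in Equation (8).
print(fuse_gaussian(0.82, 0.04, 0.74, 0.06))
```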

The formulas of the forward passing and the backward passing are as follows [33]:

$$\hat{\mathbf{h}}_n = \alpha_p\,\hat{\mathbf{h}}_{n-1} + \mathbf{n}_p, \qquad \sigma_{\hat{h}_n}^2 = \alpha_p^2\,\sigma_{\hat{h}_{n-1}}^2 + \beta\mathbf{I}, \qquad (13)$$

and

$$\hat{\mathbf{h}}_{n-1} = \alpha_p^{-1}\left(\hat{\mathbf{h}}_n + \mathbf{n}_p\right), \qquad \sigma_{\hat{h}_{n-1}}^2 = \alpha_p^{-2}\,\sigma_{\hat{h}_n}^2 + \beta\mathbf{I}, \qquad (14)$$

where $\alpha_p$ is the channel correlation coefficient of the consecutive segments, $\mathbf{n}_p$ is white Gaussian noise with zero mean and $\beta$ is the noise power.

Take the $n$-th segment as an example to show the flow of the 'global' channel estimation; the flow diagram is shown in Figure 5. For forward message passing, the local channel estimate $\hat{\mathbf{h}}_1$ of the first segment is fused with the local channel estimate $\hat{\mathbf{h}}_2$ of the second segment to obtain a fused channel estimate $\hat{\mathbf{h}}_2^f$ by using (10) and (12). Then, the message update $\hat{\mathbf{h}}_2^a$ can be obtained by using (13), and the process continues until the fused channel estimate $\hat{\mathbf{h}}_n^f$ is acquired. For backward message passing, the local channel estimate $\hat{\mathbf{h}}_{N_y}$ of the last segment is fused with the local channel estimate $\hat{\mathbf{h}}_{N_y-1}$ of the $(N_y-1)$-th segment to obtain a fused channel estimate $\hat{\mathbf{h}}_{(N_y-1)}^f$ by using (10) and (12). Then, the message update $\hat{\mathbf{h}}_{(N_y-1)}^b$ can be obtained by using (14), and the process continues until the fused channel estimate $\hat{\mathbf{h}}_{(n+1)}^b$ is acquired. Finally, $\hat{\mathbf{h}}_n^f$ and $\hat{\mathbf{h}}_{(n+1)}^b$ are fused to obtain a 'global' channel estimate $\hat{\mathbf{h}}_n^F$ of the $n$-th segment. A proper number of zeros is appended to $\hat{\mathbf{h}}_n^F$ to form a length-$L_s$ vector, i.e., $\hat{\mathbf{h}}_n^F = [\hat{\mathbf{h}}_n^F, \mathbf{0}]_{L_s \times 1}$.
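A compact sketch of the bidirectional flow of Figure 5 is given below, assuming scalar (per-tap) statistics and using the forward/backward prediction steps as reconstructed in (13) and (14); the exact message scheduling and end-segment handling of [29] may differ in detail, so this is only an illustrative reading of the procedure.

```python
import numpy as np

def fuse(mu_a, var_a, mu_b, var_b):
    # Product of Gaussians, Equations (10) and (12).
    var_f = 1.0 / (1.0 / var_a + 1.0 / var_b)
    return var_f * (mu_a / var_a + mu_b / var_b), var_f

def global_estimates(mu_loc, var_loc, alpha_p, beta):
    """'Global' per-segment channel estimates following Figure 5 (scalar per-tap statistics)."""
    Ny = len(mu_loc)
    # Forward pass: fuse the local estimates of segments 1..n, predicting with Equation (13).
    fwd = [(mu_loc[0], var_loc[0])]
    for n in range(1, Ny):
        pm, pv = alpha_p * fwd[-1][0], alpha_p ** 2 * fwd[-1][1] + beta
        fwd.append(fuse(pm, pv, mu_loc[n], var_loc[n]))
    # Backward pass: fuse the local estimates of segments n..Ny, predicting with Equation (14).
    bwd = [(mu_loc[-1], var_loc[-1])]
    for n in range(Ny - 2, -1, -1):
        pm, pv = bwd[0][0] / alpha_p, bwd[0][1] / alpha_p ** 2 + beta
        bwd.insert(0, fuse(pm, pv, mu_loc[n], var_loc[n]))
    # Global estimate of segment n: fuse the forward result at n with the backward message
    # predicted from segment n+1, so every local estimate enters exactly once.
    glob = []
    for n in range(Ny):
        if n == Ny - 1:
            glob.append(fwd[n])
        else:
            pm, pv = bwd[n + 1][0] / alpha_p, bwd[n + 1][1] / alpha_p ** 2 + beta
            glob.append(fuse(fwd[n][0], fwd[n][1], pm, pv))
    return glob

print(global_estimates([1.00, 0.92, 1.08, 0.95], [0.05] * 4, alpha_p=0.95, beta=0.01))
```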

Figure 5. Accurate channel estimation of the n-th segment.

3.2. Training Interference Elimination, Estimation of Noise Power and Turbo Equalization

We use $\mathbf{F}$ to denote a normalized discrete Fourier transform (DFT) matrix, i.e., the $(m, n)$-th element is given as $L_s^{-1/2}e^{-j2\pi mn/L_s}$ with $j = \sqrt{-1}$. Take the $n$-th segment as an example. The circulant matrix $\mathbf{H}_n$ can be diagonalized by the DFT matrix, i.e., $\mathbf{H}_n = \mathbf{F}^H\mathbf{D}_n\mathbf{F}$, where $\mathbf{D}_n$ is a diagonal matrix. After the training interference elimination, the frequency-domain received signal can be written as

$$\mathbf{z}_n = \mathbf{z}_n^f - \sqrt{r}\,\hat{\mathbf{D}}_n\mathbf{F}\mathbf{t}_{L_s} = \mathbf{F}\mathbf{y}_n - \sqrt{r}\,\hat{\mathbf{D}}_n\mathbf{F}\mathbf{t}_{L_s} = \mathbf{D}_n\mathbf{F}\mathbf{f}_n + \sqrt{r}\left(\mathbf{D}_n - \hat{\mathbf{D}}_n\right)\mathbf{F}\mathbf{t}_{L_s} + \mathbf{F}\mathbf{w}_n \approx \mathbf{D}_n\mathbf{F}\mathbf{f}_n + \mathbf{w}_n', \qquad (15)$$

where $\hat{\mathbf{D}}_n$ is the diagonal matrix constructed from the estimated channel as given in (16) below.

Based on the estimated channel, the diagonal elements of the diagonal matrix $\hat{\mathbf{D}}_n$ can be acquired as follows:

$$\left[\hat{d}_n^1, \hat{d}_n^2, \ldots, \hat{d}_n^{L_s}\right]^{Tr} = \sqrt{L_s}\,\mathbf{F}\hat{\mathbf{h}}_n^F, \qquad n = 1, \ldots, N_y. \qquad (16)$$
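The sketch below reproduces Equations (15) and (16) numerically: with an accurate channel estimate, subtracting the training contribution in the frequency domain leaves only the data part. A perfect channel estimate, T = 64 and an exponential channel profile are assumptions made only for this illustration, and the noise is omitted so the check is exact.

```python
import numpy as np

rng = np.random.default_rng(3)
Ls, Lc, T, r = 256, 12, 64, 0.25

# A segment built as in Section 3: superimposed periodic training plus QPSK data.
t_seg = np.tile(np.exp(1j * np.pi / T * np.arange(T) ** 2), Ls // T)
f_seg = (rng.choice([-1.0, 1.0], Ls) + 1j * rng.choice([-1.0, 1.0], Ls)) / np.sqrt(2)
h = (rng.normal(size=Lc) + 1j * rng.normal(size=Lc)) * np.exp(-0.3 * np.arange(Lc))
y = np.fft.ifft(np.fft.fft(np.sqrt(r) * t_seg + f_seg) * np.fft.fft(h, Ls))

# With the (here perfect) channel estimate, Equation (16) gives the diagonal of D_n and
# Equation (15) subtracts the training contribution, leaving only the data part.
h_hat = np.pad(h, (0, Ls - Lc))
F = np.fft.fft(np.eye(Ls)) / np.sqrt(Ls)        # normalized DFT matrix of Section 3.2
d_hat = np.sqrt(Ls) * (F @ h_hat)               # Equation (16)
z = F @ y - np.sqrt(r) * d_hat * (F @ t_seg)    # Equation (15)
print(np.allclose(z, d_hat * (F @ f_seg)))      # True: z equals D_n F f_n (no noise added here)
```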

As the power of the transmitted symbol sequence is set to 1, the noise power $\sigma_n^2$ for the $n$-th segment is the difference between the power $P_{y_n}$ of the received segment and the corresponding channel energy $E_{\hat{h}_n^F}$, i.e.,

$$\sigma_n^2 = P_{y_n} - E_{\hat{h}_n^F}. \qquad (17)$$

Take the $n$-th segment as an example of LMMSE equalization. Following [30,33,34], the a priori mean and variance of the symbol $f_i$ (of the symbol sequence $\mathbf{f}_n$) are as follows:

$$m_i^a = \frac{1}{\sqrt{2}}\tanh\!\left(\frac{L_n^a(c_i^1)}{2}\right) + j\,\frac{1}{\sqrt{2}}\tanh\!\left(\frac{L_n^a(c_i^2)}{2}\right), \qquad \nu_i^a = 1 - \left|m_i^a\right|^2, \qquad (18)$$

where both the initial values of $L_n^a(c_i^1)$ and $L_n^a(c_i^2)$ (the initial a priori LLRs of the interleaved encoded bits) are set to 0. The estimated interleaved bit sequence is converted to the symbol sequence $\mathbf{m}^a = [m_1^a, \ldots, m_{L_s}^a]$ by using (18). The a posteriori mean and variance of the symbol $f_i$ (of the symbol sequence $\mathbf{f}_n$) are as follows:

$$\nu_1^p = \nu_2^p = \cdots = \nu_{L_s}^p = \frac{1}{L_s}\sum_{k=1}^{L_s}\left(\frac{1}{\bar{v}} + \frac{|\hat{d}_n^k|^2}{\sigma_n^2}\right)^{-1}, \qquad \mathbf{m}^p = \mathbf{m}^a + \mathbf{F}^H\hat{\mathbf{D}}_n^H\left(\frac{\sigma_n^2}{\bar{v}}\mathbf{I} + \hat{\mathbf{D}}_n\hat{\mathbf{D}}_n^H\right)^{-1}\left(\mathbf{z}_n - \hat{\mathbf{D}}_n\mathbf{F}\mathbf{m}^a\right), \qquad (19)$$

where $\bar{v} = \frac{1}{L_s}\sum_{i=1}^{L_s}v_i^a$ and $\mathbf{m}^p = [m_1^p, \ldots, m_{L_s}^p]$. The a posteriori mean sequence $\mathbf{m}^p$ is the estimated symbol sequence after LMMSE equalization. It is noted that the computational complexity of the LMMSE equalizer is dominated by (19), and it is only on the order of $\log(L_s)$ per symbol. In addition, the extrinsic mean and variance of the symbol $f_i$ (of the symbol sequence $\mathbf{f}_n$) are as follows:

$$\nu_i^e = \left(\frac{1}{\nu_i^p} - \frac{1}{\nu_i^a}\right)^{-1}, \qquad m_i^e = \nu_i^e\left(\frac{m_i^p}{\nu_i^p} - \frac{m_i^a}{\nu_i^a}\right). \qquad (20)$$

As QPSK mapping is used, the extrinsic LLRs of the interleaved encoded bits $c_i^1$ and $c_i^2$ can be expressed as

$$L_n^e(c_i^1) = 2\sqrt{2}\,\frac{\mathrm{Re}\{m_i^e\}}{v_i^e}, \qquad L_n^e(c_i^2) = 2\sqrt{2}\,\frac{\mathrm{Im}\{m_i^e\}}{v_i^e}. \qquad (21)$$

The estimated symbol sequence is converted to the extrinsic LLRs (i.e., the estimated interleaved bit sequence $L_n^e$) by using (19)–(21). The extrinsic LLRs of the segments are denoted collectively as $L_e$ and then input into the decoder for the next round of iteration (turbo equalization).
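Putting (18)–(21) together, the following sketch implements one LMMSE equalization pass in the frequency domain. It follows the formulas as reconstructed above, omits the numerical safeguards (LLR and variance clipping) a practical turbo equalizer would need, and uses a flat channel in the self-check at the end; all of these simplifications are assumptions of the sketch, not details from the paper.

```python
import numpy as np

def qpsk_soft_stats(La):
    """A priori symbol mean/variance from the interleaved coded-bit LLRs, Equation (18).
    La has shape (Ls, 2): the LLRs of the two bits carried by each QPSK symbol."""
    m = (np.tanh(La[:, 0] / 2) + 1j * np.tanh(La[:, 1] / 2)) / np.sqrt(2)
    return m, 1.0 - np.abs(m) ** 2

def lmmse_equalize(z, d_hat, sigma2, m_a, v_a):
    """One FFT-based LMMSE pass, Equation (19), followed by the extrinsic step (20) and LLRs (21)."""
    Ls = z.size
    v_bar = np.mean(v_a)
    v_p = np.mean(1.0 / (1.0 / v_bar + np.abs(d_hat) ** 2 / sigma2))      # a posteriori variance
    gain = d_hat.conj() / (sigma2 / v_bar + np.abs(d_hat) ** 2)
    m_p = m_a + np.sqrt(Ls) * np.fft.ifft(gain * (z - d_hat * np.fft.fft(m_a) / np.sqrt(Ls)))
    v_e = 1.0 / (1.0 / v_p - 1.0 / v_a)                                   # extrinsic variance, (20)
    m_e = v_e * (m_p / v_p - m_a / v_a)                                   # extrinsic mean, (20)
    Le = np.stack([2 * np.sqrt(2) * m_e.real / v_e,
                   2 * np.sqrt(2) * m_e.imag / v_e], axis=1)              # extrinsic LLRs, (21)
    return m_p, Le

# Self-check with a flat channel: on the first iteration (L_a = 0) the equalizer output
# m_p should already be close to the transmitted QPSK symbols.
rng = np.random.default_rng(4)
Ls = 256
f = (rng.choice([-1.0, 1.0], Ls) + 1j * rng.choice([-1.0, 1.0], Ls)) / np.sqrt(2)
d = np.ones(Ls, dtype=complex)
z = d * np.fft.fft(f) / np.sqrt(Ls) + 0.05 * (rng.normal(size=Ls) + 1j * rng.normal(size=Ls))
m_a, v_a = qpsk_soft_stats(np.zeros((Ls, 2)))
m_p, Le = lmmse_equalize(z, d, 0.005, m_a, v_a)
print(np.mean(np.abs(m_p - f) ** 2) < 0.05)
```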

3.3. Adaptive Underwater Acoustic Communications Based on Constellation Aggregation

We set a certain iteration number, such as one iteration, and then obtain the aggregation degree after LMMSE equalization by using (22), where $\hat{f}_i$ is the a posteriori mean of a symbol $f_i$, and its real part and imaginary part are denoted by $\hat{f}_i^{Re}$ and $\hat{f}_i^{Im}$, respectively.

$$\xi_i = \min_{(a,b)\,\in\,\frac{1}{\sqrt{2}}\{(1,1),(1,-1),(-1,1),(-1,-1)\}}\left\|\left(\hat{f}_i^{Re}, \hat{f}_i^{Im}\right) - (a,b)\right\| = \left|\left(\mathrm{abs}(\hat{f}_i^{Re}) + j\,\mathrm{abs}(\hat{f}_i^{Im})\right) - \left(\frac{1}{\sqrt{2}} + j\,\frac{1}{\sqrt{2}}\right)\right| = \sqrt{\left(\mathrm{abs}(\hat{f}_i^{Re}) - \frac{1}{\sqrt{2}}\right)^2 + \left(\mathrm{abs}(\hat{f}_i^{Im}) - \frac{1}{\sqrt{2}}\right)^2}, \qquad (22)$$

i.e., the distance of the equalized symbol from the nearest ideal QPSK constellation point.

We compute the mean of all $\xi_i$ for a frame of information bits, i.e., $\xi_m = \frac{1}{L_f}\sum_{i=1}^{L_f}\xi_i$. Denote $\xi_{in}$ and $\xi_{ex}$ as the inner boundary and the outer boundary, respectively. As shown in Figure 4, when $\xi_m < \xi_{in}$, the encoding rate will be increased automatically; when $\xi_m > \xi_{ex}$, the encoding rate will be reduced automatically; when $\xi_{in} \leq \xi_m \leq \xi_{ex}$, the encoding rate will be kept.
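A minimal sketch of the CA decision of Equation (22) and Figure 4 is given below, using the thresholds ξ_in = 0.03 and ξ_ex = 0.2 that are adopted later in Table 2; the noise levels in the example are arbitrary and only illustrate the two extreme outcomes.

```python
import numpy as np

def aggregation_degree(f_post):
    """Per-symbol aggregation degree of Equation (22): distance of the equalized symbol,
    folded into the first quadrant, from the ideal QPSK point (1/sqrt(2), 1/sqrt(2))."""
    return np.sqrt((np.abs(f_post.real) - 1 / np.sqrt(2)) ** 2 +
                   (np.abs(f_post.imag) - 1 / np.sqrt(2)) ** 2)

def rate_decision(f_post, xi_in=0.03, xi_ex=0.2):
    """Encoding-rate decision of Figure 4 (thresholds as later set in Table 2)."""
    xi_m = float(np.mean(aggregation_degree(f_post)))
    if xi_m < xi_in:
        return xi_m, "increase the encoding rate"
    if xi_m > xi_ex:
        return xi_m, "reduce the encoding rate"
    return xi_m, "keep the encoding rate"

# Tightly clustered equalizer output -> increase the rate; scattered output -> reduce it.
rng = np.random.default_rng(5)
ideal = (rng.choice([-1.0, 1.0], 1024) + 1j * rng.choice([-1.0, 1.0], 1024)) / np.sqrt(2)
print(rate_decision(ideal + 0.01 * (rng.normal(size=1024) + 1j * rng.normal(size=1024))))
print(rate_decision(ideal + 0.40 * (rng.normal(size=1024) + 1j * rng.normal(size=1024))))
```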

4. Simulation Results

The simulation parameters are shown in Table 1. Rate-1/2, rate-1/4, rate-1/8 and rate-1/16 convolutional codes and QPSK mapping are used. A variety of power ratios of the training sequence and the symbol sequence, such as 0.15:1, 0.2:1, 0.25:1 and 0.3:1, are used. The standard block with 1024 symbols is divided into a number of segments with a variety of lengths, including 128 symbols, 256 symbols, 512 symbols and 1024 symbols. The corresponding cases are denoted by S128, S256, S512 and W1024, respectively, where the prefix W means that the standard block is treated as a segment. W1024 is used as the benchmark turbo system. S128, S256 and S512 are used in the proposed GL turbo system. The CP is set to 128 symbols. One frame includes 100 blocks, and one block includes 1024 information bits. Assume that a 4 kHz bandwidth is provided. For S256, with rate-1/2, rate-1/4, rate-1/8 and rate-1/16 convolutional codes, the transmission rates are 2667 bits/s, 1333 bits/s, 667 bits/s and 333 bits/s, respectively, and the corresponding bandwidth efficiencies are 0.67 bps/Hz, 0.33 bps/Hz, 0.17 bps/Hz and 0.08 bps/Hz, respectively. The SNR is from −4 dB to 13 dB. A static channel, as shown in Figure 6, and the white Gaussian noise are used.
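For reference, the quoted S256 transmission rates and bandwidth efficiencies follow from simple frame arithmetic, assuming one symbol per second per hertz of bandwidth, no time overhead for the superimposed training and the HFM head and tail ignored; a quick check:

```python
# Quick check of the S256 numbers quoted above (QPSK over a 4 kHz bandwidth).
symbol_rate = 4000.0                      # assumption: 1 symbol/s per Hz of bandwidth
block_symbols = 4 * (256 + 128)           # 4 segments of 256 payload symbols, each with a 128-symbol CP
block_time = block_symbols / symbol_rate  # 0.384 s per 1024-symbol block
for label, rate in (("1/2", 1 / 2), ("1/4", 1 / 4), ("1/8", 1 / 8), ("1/16", 1 / 16)):
    info_bits = 1024 * 2 * rate           # 1024 QPSK symbols carry 2048 coded bits
    print(f"rate-{label}: {info_bits / block_time:.0f} bits/s, "
          f"{info_bits / block_time / 4000:.2f} bps/Hz")
# Prints 2667, 1333, 667 and 333 bits/s (0.67, 0.33, 0.17 and 0.08 bps/Hz), matching the text and Table 1.
```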

Table 1. Parameters of simulations and experiments.

Parameter | Simulation | Yellow Sea 1 | Yellow Sea 2
Encoding rate | 1/2, 1/4, 1/8, 1/16 | 1/4, 1/8, 1/16 | 1/2, 1/4
Power ratio r | 0.15:1 to 0.3:1 | 0.25:1 | 0.25:1
Segment length | 128, 256, 512, 1024 sym | 256, 512, 1024 sym | 256 sym
CP | 128 sym | 128 sym | 16 sym
1 frame, 1 block | 100 blocks, 1024 bits | 16 blocks, 1024 bits | 16 blocks, 1024 bits
Mapping, system | QPSK, Baseband | QPSK, Single carrier | QPSK, Single carrier
Time dura. of one bit | | 5 × 10^−4, 10 × 10^−4, 20 × 10^−4 s | 2.5 × 10^−4, 5 × 10^−4 s
Time dura. of one symbol | | 2.5 × 10^−4 s | 2.5 × 10^−4 s
Center frequency, Filter | | 12 kHz, Band pass | 12 kHz, Band pass
Bandwidth | | 4 kHz | 4 kHz
Sampling frequency | | 96 kHz | 96 kHz
S256, Transmission rate | | 1333, 667, 333 bits/s | 3765, 1882 bits/s
S256, Band. effi. (bps/Hz) | | 0.33 (1/4), 0.17, 0.08 | 0.94 (1/2), 0.47
Communication distance | | 5.5 km | 5.5 km
Transducer depth | | 4 m | 4 m
Hydrophone depth | | 4 m | 5 m
Relative speed | | 0.5 m/s | 0.5 m/s
SNR | −4 dB to 13 dB | approximately 9 dB | approximately 13 dB

Figure 6. Impulse response of a channel.

The BER performance for S256 is shown in Figure 7. It can be seen from the results that both the ST scheme and the channel estimate fusion based on Gaussian likelihood are effective. The lower the encoding rate, the better the BER performance that the system can achieve. Taking Figure 7b, with SNR = 7 dB and rate-1/4 convolutional code, after three iterations, 100 blocks of information bits are correctly decoded. From Figure 7, after two iterations, all information bits with rate-1/2 convolutional code and SNR = 13 dB (Figure 7a), with rate-1/4 convolutional code and SNR = 8 dB (Figure 7b), with rate-1/8 convolutional code and SNR = 3 dB (Figure 7c) and with rate-1/16 convolutional code and SNR = 0 dB (Figure 7d) are correctly decoded.

Figure 7. BER performance for S256. (a) Rate-1/2 convolutional code; (b) rate-1/4 convolutional code; (c) rate-1/8 convolutional code; (d) rate-1/16 convolutional code.

Taking a block of 1024 information bits with rate-1/4 convolutional code as an example, where the noise is not added, the channel in Figure 6 is used, and the channel estimation and equalization results are shown in Figure 8. S256 and a static channel are used; therefore, the channels of the four consecutive segments are the same and their channels are perfectly correlated, i.e., αp=1. When turbo equalization is not used, i.e., with 0 iterations, the corresponding channel estimate and equalization results are shown in Figure 8(a1),(b1). The estimated channel in Figure 8(a1) is obviously different from the real channel in Figure 6, and the constellation points after LMMSE equalization in Figure 8(b1) are significantly scattered. When turbo equalization is performed once, i.e., after one iteration, the corresponding equalization results are shown in Figure 8(b2), where it is noted that the estimated channels have been updated before turbo equalization, and the aggregation degree of the constellation points after LMMSE equalization becomes significantly better. When turbo equalization is performed two times, i.e., after two iterations, the corresponding equalization results are shown in Figure 8(b3), where the constellation points after LMMSE equalization are ideally condensed together, and the corresponding estimated channel in Figure 8(a2) is exactly the same as the real channel in Figure 6, demonstrating the effectiveness of the ST scheme used for enhancing the channel estimation and tracking capability and the GL algorithm used for the channel information fusion of the segments.

Figure 8. Estimated channels and constellations of a block of 1024 information bits with S256 in simulations at SNR = 13 dB. (a1) Channel estimate of one segment without iteration; (a2) channel estimate of one segment after 2 iterations; (b1) constellations after 0 iterations; (b2) constellations after 1 iteration; (b3) constellations after 2 iterations; (b4) constellations after 3 iterations.

From Figure 8b, it is clear that we can carry out adaptive communications according to the pre-set constellation aggregation degree threshold. It is important to note that we do not show the adaptive communication performance, as we can see the results clearly from Figure 7.

Next, we test the BER performance with a variety of power ratios of the training sequence and the symbol sequence. Taking S128 with rate-1/2 convolutional code as an example, the BER performance of the system is shown in Figure 9a. The green triangle line represents the BER performance with a power ratio 0.2:1 and SNR = 13 dB; after three iterations, 100 blocks of information bits are correctly decoded. Considering the complexity and variability of underwater acoustic channels incurred by the moving transceivers, the power ratio 0.25:1 is used in the follow-up simulations and experiments. Assuming that the SNR = 13 dB and the power ratio is 0.25:1, the BER performance of the system is shown in Figure 9b. The blue star line represents the BER performance of the system with the training interference elimination; after two iterations, 100 blocks of information bits are correctly decoded, demonstrating the effectiveness of the training interference elimination. The pink square line represents the BER performance of the system without the training interference elimination, and it can be seen that, if we do not use the training interference elimination, the system simply does not work.

Figure 9. BER performance of the system with S128 and SNR = 13 dB. (a) A variety of power ratios of the training sequence and the symbol sequence; (b) with or without the training interference elimination; the power ratio is 0.25:1.

Then, we test the BER performance of the system by using the ST scheme and the GL algorithm, where W1024 is used as the benchmark turbo system. The BER performance comparison is shown in Figure 10. From Figure 10a, if the GL algorithm is not used, the system with a variety of segment lengths does not work. From Figure 10b, if the GL algorithm is used to fuse the local channel estimates to obtain global channel estimates, it can be seen that, no matter how long the segment is, the BER performances for S128, S256, S512 and W1024 are similar. This is because, regardless of the segment length, the ‘whole’ standard symbol block is used to acquire the global channel estimate for each segment. This demonstrates that the proposed GL turbo system (S128, S256 and S512) can achieve a similar performance as the benchmark turbo system (W1024).

Figure 10. BER performance of the system with SNR = 13 dB, rate-1/2 convolutional code and the power ratio 0.25:1. (a) The GL algorithm is not used for channel estimation; (b) the GL algorithm is used for channel estimation.

5. Experimental Results

Two separate underwater acoustic communication experiments with moving transceivers were carried out in the Yellow Sea in 2021, named Yellow Sea 1 and Yellow Sea 2, respectively. Their deployments are shown in Figure 11a,b, respectively. We did not use a vertical array in the experiments. The two receiving hydrophones were completely independent, i.e., they operated without any connection to each other; therefore, multiple receiver channels were not exploited in the system. The height of the sea waves was from 0.5 m to 1 m; the sea temperature was 5.6 °C; the south wind was from level 3 to level 4; and the ship with the transducer floated away from the ship with the hydrophone at a speed of approximately 0.5 m/s.

Figure 11. Experimental deployment. (a) Yellow Sea 1; (b) Yellow Sea 2.

The detailed experimental parameters are shown in Table 1. For the two experiments, QPSK mapping and the power ratio of the training sequence and the symbol sequence 0.25:1 were used; both the communication distances of the transceivers were approximately 5.5 km; one frame included 16 blocks, and one block included 1024 information bits; the single-carrier communication system was used; the center frequency was 12 kHz with a bandwidth of 4 kHz; and the sampling frequency was 96 kHz. The signal structure for field experiments is shown in Figure 12.

Figure 12. Signal structure for field experiments.

5.1. Adaptive Underwater Acoustic Communications with SNR = 9 dB

The experimental deployment and instruments for Yellow Sea 1 are shown in Figure 11a and Figure 13, respectively. Rate-1/4, rate-1/8 and rate-1/16 convolutional codes were adopted. S256, S512 and W1024 were used, and the CP was set to 128 symbols. Taking S256 as an example, for the rate-1/4, rate-1/8 and rate-1/16 convolutional codes, the transmission rates were 1333 bits/s, 667 bits/s and 333 bits/s, respectively, and the corresponding bandwidth efficiencies were 0.33 bps/Hz, 0.17 bps/Hz and 0.08 bps/Hz, respectively. Both transceivers were deployed at a depth of 4 m.

Figure 13. Experimental instruments in the Yellow Sea. (a) The whole system; (b) transmitter; (c) power amplifier; (d) the transducer and the hydrophone; (e) receiver.

We first used the rate-1/16 convolutional code, and the BER performance of 16 data blocks based on the GL algorithm is shown in Figure 14. By comparing the results of S256, S512 and the benchmark turbo system (W1024), it can be seen that S256 was much more effective than S512 and the benchmark turbo system for underwater acoustic communications with moving transceivers. After only one iteration, all information bits with S256 were correctly decoded. However, both S512 (pink square curve) and the benchmark turbo system (blue dotted curve) were completely invalid. This is because moving communications incur time-varying channels, and the average channel estimate does not effectively represent the channel information of the 512-symbol block and the 1024-symbol block. Taking the first block for S256 in Figure 14 as an example, as S256 was used, there were four consecutive segments for the first data block, and their channels were different due to the floating transceivers, i.e., $\alpha_p \neq 1$. It is important to note that $\alpha_p$ can be obtained automatically: it was calculated as the correlation coefficient of the estimated channels of the four segments, which was also used in the initial channel estimation. When turbo equalization was not used, i.e., with 0 iterations, the corresponding channel equalization results are shown in Figure 15(b1), and the constellation points after LMMSE equalization were very scattered. Then, the automatically determined $\alpha_p$ was recalculated by using the updated channel estimates of the four segments. When turbo equalization was performed once, i.e., after one iteration, the corresponding equalization results are shown in Figure 15(b2), where the constellation points after LMMSE equalization were ideally condensed together. The corresponding estimated channels of the four segments in Figure 15a were significantly different, where $\alpha_p$ = 0.07 after one iteration, demonstrating the time variation of the channel and the effectiveness of the ST scheme and the GL algorithm.

Figure 14. BER performance of the GL turbo system with rate-1/16 convolutional code for Yellow Sea 1.

Figure 15. Estimated channels and equalization results of the first block with S256 and 1/16 code rate for Yellow Sea 1 in Figure 14. (a) The estimated channels of the four consecutive segments after 1 iteration; (b1) constellations after 0 iterations; (b2) constellations after 1 iteration.

Then, we carried out field experiments with a variety of convolutional codes to test the effectiveness of direct adaptive communications. The adaptive threshold setting is shown in Table 2. We used the mean aggregation degree after one iteration for the threshold comparison, where the inner boundary is set to $\xi_{in}$ = 0.03 and the outer boundary is set to $\xi_{ex}$ = 0.2. When the mean aggregation degree is $\xi_m$ < 0.03, the encoding (transmission) rate will be increased automatically; when $\xi_m$ > 0.2, the encoding (transmission) rate will be reduced automatically; when 0.03 ≤ $\xi_m$ ≤ 0.2, the encoding (transmission) rate will be kept the same.

Table 2. Threshold setting of the mean aggregation degree ξm after one iteration for Yellow Sea 1 and Yellow Sea 2.

ξm < 0.03 | Improve the encoding rate
ξm > 0.2 | Reduce the encoding rate
0.03 ≤ ξm ≤ 0.2 | Keep the encoding rate

The calculation of the mean aggregation degree $\xi_m$ is shown in Table 3. For Yellow Sea 1, assuming that the rate-1/16 convolutional code was used first, after one iteration, the mean aggregation degree was $\xi_m$ = 0.002, which was less than 0.03. Therefore, the encoding rate was increased automatically, i.e., the encoding rate was adjusted from the rate-1/16 convolutional code to the rate-1/8 convolutional code automatically. We can see that, after one iteration with the rate-1/8 convolutional code, the mean aggregation degree was $\xi_m$ = 0.0591, which belonged to [0.03, 0.2]. Therefore, the encoding rate was kept the same. Assuming that the rate-1/4 convolutional code was used first, after one iteration, the mean aggregation degree was $\xi_m$ = 0.4442, which was more than 0.2. Therefore, the encoding rate was reduced automatically, i.e., the encoding rate was adjusted from the rate-1/4 convolutional code to the rate-1/8 convolutional code automatically. As, after one iteration with the rate-1/8 convolutional code, the mean aggregation degree was $\xi_m$ = 0.0591, which belonged to [0.03, 0.2], the encoding rate was kept. The aggregation performance of the 16 blocks of information bits with S256 is shown in Figure 16. From Figure 16b, after one iteration with the rate-1/8 convolutional code, the constellation points of the 16 blocks of information bits were obviously clustered.

Table 3. Calculation of the mean aggregation degree ξm for Yellow Sea 1 and Yellow Sea 2.

Iteration Number | Yellow Sea 1: Rate-1/4 | Rate-1/8 | Rate-1/16 | Yellow Sea 2: Rate-1/2 | Rate-1/4
0 | 0.5546 | 0.5505 | 0.5613 | 0.4719 | 0.5396
1 | 0.4442 | 0.0591 | 0.002 | 0.041 | 0.007
2 | 0.2953 | 3.44 × 10^−6 | 3.09 × 10^−6 | 0.0018 | 1.51 × 10^−6

Figure 16. Constellations of the 16 blocks of data with S256 after 0, 1 and 2 iterations for Yellow Sea 1 in Figure 17. (a1) Rate-1/4, 0 iterations; (a2) rate-1/4, 1 iteration; (a3) rate-1/4, 2 iterations; (b1) rate-1/8, 0 iterations; (b2) rate-1/8, 1 iteration; (b3) rate-1/8, 2 iterations; (c1) rate-1/16, 0 iterations; (c2) rate-1/16, 1 iteration; (c3) rate-1/16, 2 iterations.

The BER performance based on the ST scheme and the GL algorithm with S256 for Yellow Sea 1 is shown in Figure 17. We can see that, after one iteration, the decoding with rate-1/4 convolutional code was invalid, and the decodings with rate-1/8 convolutional code and rate-1/16 convolutional code were valid. With rate-1/8 convolutional code, all information bits were correctly decoded after one iteration, and the BER performance was sufficient in meeting the needs for underwater acoustic communications. Therefore, the rate-1/8 convolutional code was kept, which was in keeping with the result from the mean aggregation degree, demonstrating the effectiveness of the GL algorithm and the CA algorithm.

Figure 17. BER performance of the GL turbo system with S256 for Yellow Sea 1.

5.2. Adaptive Underwater Acoustic Communications with SNR = 13 dB

The experimental deployment in the Yellow Sea is shown in Figure 11b. An underwater acoustic communication machine (Seatrix Modem) was used, whose illustration and dimensions are shown in Figure 18 and Figure 19, respectively. The machine is introduced in Table 4. An SD card was plugged into the Seatrix Modem to collect data at the receiver, and the collected data were analyzed by using a computer. The transmitting ship floated away from the receiving ship at a speed of approximately 0.5 m/s. For Yellow Sea 2, rate-1/2 and rate-1/4 convolutional codes were used. S256 was used, and the CP was set to 16 symbols. The transmission rates were 3765 bits/s and 1882 bits/s, respectively, and the corresponding bandwidth efficiencies were 0.94 bps/Hz (rate-1/2) and 0.47 bps/Hz (rate-1/4), respectively. The deployment depths of the transducer and the hydrophone were 4 m and 5 m, respectively. The main goal of the experiment was to demonstrate a successful implementation of the proposed algorithms on modem hardware. As the receiver in Section 5.2 had a higher SNR than the receiver in Section 5.1, higher code rates could be used in Section 5.2. As the communication environments in Section 5.1 and Section 5.2 were similar, their channels are comparable.

Figure 18. Illustration of the underwater acoustic communication machine (Seatrix Modem). (a) Single machine; (b) multiple machines.

Figure 19. Dimensions of the underwater acoustic communication machine (Seatrix Modem).

Table 4. Introduction of the underwater acoustic communication machine (Seatrix Modem).

Communication frequency | 9 kHz to 14 kHz (10 kHz to 14 kHz was used)
Communication distance | 6000 m with high SNR
Work depth | 2000 m
Electrical parameters | Received power consumption 1 W; transmission power consumption 10 W to 60 W; built-in 400 Wh rechargeable battery; RS-232 interface
Mechanical parameters | Length 790 mm × Width (Diameter) 130 mm; Weight 15 kg (including the battery)

We used the rate-1/2 and rate-1/4 convolutional codes and S256. The BER performance based on the GL algorithm is shown in Table 5. It can be seen that S256 was very effective for underwater acoustic communications with moving transceivers. After only one iteration, all information bits were correctly decoded. Taking the fourth block with the rate-1/2 convolutional code in Table 5 as an example, there were four consecutive segments for the fourth data block, and their channels were different due to the floating transceivers, i.e., $\alpha_p \neq 1$. When turbo equalization was not used, i.e., with 0 iterations, the corresponding channel equalization results are shown in Figure 20(b1), and the constellation points after LMMSE equalization were very scattered. Then, the automatically determined $\alpha_p$ was updated. When turbo equalization was performed once, i.e., after one iteration, as shown in Figure 20(b2), the constellation points after LMMSE equalization were still scattered. After three iterations, as shown in Figure 20(b4), the constellation points after LMMSE equalization were ideally condensed together. The corresponding estimated channels of the four segments in Figure 20a were significantly different, where $\alpha_p$ = 0.09 after three iterations, demonstrating the time variation of the channel and the effectiveness of the ST scheme and the GL algorithm. Comparing Figure 15a in Yellow Sea 1 and Figure 20a in Yellow Sea 2, it can be seen that their channel lengths are almost the same; this is because the communication environments were basically the same. We did not show the BERs for S512 and W1024, as they basically did not work.

Table 5. BER performance of the GL turbo system with S256 at SNR = 13 dB for Yellow Sea 2.

Block Number | Rate-1/2, iter. 0 | iter. 1 | iter. 2 | Rate-1/4, iter. 0 | iter. 1 | iter. 2
1 | 8.8% | 0 | 0 | 0 | 0 | 0
2 | 2.3% | 0 | 0 | 0 | 0 | 0
3 | 0.7% | 0 | 0 | 0 | 0 | 0
4 | 3.1% | 0 | 0 | 0 | 0 | 0
5 | 0 | 0 | 0 | 0 | 0 | 0
6 | 0 | 0 | 0 | 0 | 0 | 0
7 | 0 | 0 | 0 | 0 | 0 | 0
8 | 0 | 0 | 0 | 0 | 0 | 0
9 | 0 | 0 | 0 | 2.0% | 0 | 0
10 | 0 | 0 | 0 | 0 | 0 | 0
11 | 0 | 0 | 0 | 0 | 0 | 0
12 | 0 | 0 | 0 | 0 | 0 | 0
13 | 0 | 0 | 0 | 0 | 0 | 0
14 | 0 | 0 | 0 | 0 | 0 | 0
15 | 0 | 0 | 0 | 0 | 0 | 0
16 | 0 | 0 | 0 | 0 | 0 | 0
Mean | 0.9% | 0 | 0 | 0.1% | 0 | 0

Figure 20. Estimated channels and equalization results of the fourth block with S256 and 1/2 code rate for Yellow Sea 2 in Table 5. (a) The estimated channels of the four consecutive segments after 3 iterations; (b1) constellations after 0 iterations; (b2) constellations after 1 iteration; (b3) constellations after 2 iterations; (b4) constellations after 3 iterations.

Then, we tested the effectiveness of direct adaptive communications with real underwater acoustic communication machines. The adaptive threshold setting is shown in Table 2. We again used the mean aggregation degree after one iteration for the threshold comparison, where the inner boundary is set to $\xi_{in}$ = 0.03 and the outer boundary is set to $\xi_{ex}$ = 0.2. When the mean aggregation degree is $\xi_m$ < 0.03, the encoding (transmission) rate will be increased automatically; when $\xi_m$ > 0.2, the encoding (transmission) rate will be reduced automatically; when 0.03 ≤ $\xi_m$ ≤ 0.2, the encoding (transmission) rate will be kept the same.

The calculation of the mean aggregation degree $\xi_m$ is shown in Table 3. For Yellow Sea 2, assuming that the rate-1/4 convolutional code was used first, after one iteration, the mean aggregation degree was $\xi_m$ = 0.007, which was less than 0.03. Therefore, the encoding rate was increased automatically, i.e., the channel code was adjusted from the rate-1/4 convolutional code to the rate-1/2 convolutional code automatically. After one iteration with the rate-1/2 convolutional code, the mean aggregation degree was $\xi_m$ = 0.041, which belonged to [0.03, 0.2]. Therefore, the encoding rate was kept the same. Assuming that the rate-1/2 convolutional code was used first, after one iteration, the mean aggregation degree was $\xi_m$ = 0.041, which belonged to [0.03, 0.2]; therefore, the encoding rate was kept. The aggregation performance of the 16 blocks of information bits with S256 for Yellow Sea 2 is shown in Figure 21. From Figure 21a, after one iteration with the rate-1/2 convolutional code, the constellation points of the 16 blocks of information bits became obviously clustered.

Figure 21. Constellations of the 16 blocks of data with S256 after 0, 1 and 2 iterations for Yellow Sea 2 in Table 5. (a1) Rate-1/2, 0 iterations; (a2) rate-1/2, 1 iteration; (a3) rate-1/2, 2 iterations; (b1) rate-1/4, 0 iterations; (b2) rate-1/4, 1 iteration; (b3) rate-1/4, 2 iterations.

The BER performance based on the GL algorithm with S256 is shown in Table 5. After one iteration, the decoding performance with rate-1/2 convolutional code was sufficient in meeting the needs for underwater acoustic communications. Therefore, the rate-1/2 convolutional code was kept, which was in keeping with the result from the mean aggregation degree. The experiment demonstrated the effectiveness and practicability of the proposed algorithms in real underwater acoustic communication machines.

From the above simulations and experimental results, we can conclude that the best segment length is $2^n$, where $n$ is an integer, and should be close to and longer than the channel length, and shorter than a period of the training sequence. Considering the transmission rate and the time variation of the channel, S256 is better than S128, S512 and W1024. The two separate experimental results show that, with SNR = 9 dB, the 1/8 code rate is effective; with SNR = 13 dB, the 1/2 code rate is effective. Even if $\alpha_p$ = 0.07 and $\alpha_p$ = 0.09 ($\alpha_p$ can be obtained by calculating the correlation coefficient of the consecutive segments), i.e., the channels of the four segments are weakly correlated, the proposed system is still effective.

6. Conclusions

The GL algorithm and the CA algorithm have been proposed to achieve a globally accurate channel estimation of each segment and automatic encoding rate adjustment. To improve the estimation and tracking capability for time-varying channels, the ST scheme has been used. For channel estimate fusion of the segments, S256 is the best choice for practical moving underwater acoustic communications. Even if the channel correlation coefficients of the segments are as low as 0.07 and 0.09, the proposed GL turbo system is still effective. The experimental results demonstrate that the 1/8 code rate is effective at SNR = 9 dB, and the 1/2 code rate is effective at SNR = 13 dB. In the process of iteration, direct adaptive communications based on constellation aggregation have been realized. The experimental results illustrate that the encoding rate can be adjusted automatically among the 1/2, 1/4, 1/8 and 1/16 code rates by using the mean aggregation degree decision. Simulations and experimental results have verified the effectiveness of the proposed system.

Appendix A

Assuming that the circulant matrix of the impulse response for the channel is Hn, the transmitted signals are sn and the white Gaussian noise is w. Then, the received signal can be represented as (A1).

$$\mathbf{y}_n = \mathbf{H}_n\mathbf{s}_n + \mathbf{w} = \mathbf{H}_n\left(\sqrt{r}\,\mathbf{t}_{L_s} + \mathbf{f}_n\right) + \mathbf{w} = \begin{bmatrix} h_1 & 0 & \cdots & \cdots & 0\\ h_2 & h_1 & \ddots & & \vdots\\ \vdots & h_2 & \ddots & \ddots & \vdots\\ h_{L_c} & \vdots & \ddots & h_1 & 0\\ 0 & h_{L_c} & & h_2 & h_1\\ \vdots & & \ddots & \vdots & \vdots\\ 0 & \cdots & 0 & 0 & h_{L_c} \end{bmatrix}_{(L_s+L_c-1)\times(L_s+L_c-1)} \begin{bmatrix} s_1\\ s_2\\ \vdots\\ s_{L_s}\\ 0\\ \vdots\\ 0 \end{bmatrix}_{(L_s+L_c-1)\times 1} + \mathbf{w} = \begin{bmatrix} h_1\\ h_2\\ \vdots\\ h_{L_c} \end{bmatrix}_{L_c\times 1} \circledast \begin{bmatrix} s_1\\ s_2\\ \vdots\\ s_{L_s} \end{bmatrix}_{L_s\times 1} + \mathbf{w} = \mathbf{h}_n \circledast \mathbf{s}_n + \mathbf{w}. \qquad (A1)$$

Elements of $\mathbf{h}_n$, $\mathbf{s}_n$, $\mathbf{t}_{L_s}$, $\mathbf{f}_n$ and $\mathbf{w}$ are taken to build the channel estimator based on the LS algorithm. An element of the impulse response of the channel is denoted by $h_n$; an element of the symbol sequence is denoted by $f_n$; an element of the periodic training sequence is denoted by $t_n$ with a period of $T$; and the superimposed symbol sequence is denoted by $s_n = f_n + \sqrt{r}\,t_n$. As $\sqrt{r}$ is a constant, the superimposed symbol sequence can be simplified as $s_n = f_n + t_n$. Then, an element of the received signal sequence can be expressed as

$$y_n = h_n \circledast s_n + w_n = \sum_{i=1}^{L_c} f_{(n+1-i)}\,h_i + \sum_{i=1}^{L_c} t_{(n+1-i)}\,h_i + w_n, \qquad (A2)$$

where $w_n$ and $L_c$ are the white Gaussian noise and the channel length, respectively. Assuming that the mean value of the symbol sequence is 0 and $T \geq L_c$, we can obtain

$$E\{y_n\} = \sum_{i=1}^{L_c} t_{(n+1-i)}\,h_i. \qquad (A3)$$

Let $L_s = pT$. We divide $E\{\mathbf{y}_n\}$ into $p$ subsegments with a length of $T$, i.e.,

$$E\{\mathbf{y}_n\} = E\left\{\begin{bmatrix} y_1\\ y_2\\ \vdots\\ y_{L_s}\end{bmatrix}_{L_s\times 1}\right\} = \begin{bmatrix} E\{\mathbf{y}_{1T}\}\\ E\{\mathbf{y}_{2T}\}\\ \vdots\\ E\{\mathbf{y}_{pT}\}\end{bmatrix}, \qquad (A4)$$

where

$$E\{\mathbf{y}_{iT}\} = \begin{bmatrix} t_{iT+1} & t_{iT} & \cdots & t_{iT-L_c+2}\\ t_{iT+2} & t_{iT+1} & \cdots & t_{iT-L_c+3}\\ \vdots & \vdots & \ddots & \vdots\\ t_{iT+T} & t_{iT+T-1} & \cdots & t_{iT+T-L_c+1}\end{bmatrix}_{T\times L_c}\begin{bmatrix} h_1\\ h_2\\ \vdots\\ h_{L_c}\end{bmatrix}_{L_c\times 1} \overset{\Delta}{=} \mathbf{A}\,\mathbf{h}_n. \qquad (A5)$$

As we use a periodic training sequence with a period of T, we can acquire

$$\mathbf{A} = \begin{bmatrix} t_1 & t_T & \cdots & t_{T-L_c+2}\\ t_2 & t_1 & \cdots & t_{T-L_c+3}\\ \vdots & \vdots & \ddots & \vdots\\ t_T & t_{T-1} & \cdots & t_{T-L_c+1}\end{bmatrix}. \qquad (A6)$$

It can be seen from (A5) that $\mathbf{A}$ is a Toeplitz matrix that does not depend on $i$. A cumulative sum calculation is performed for $E\{\mathbf{y}_n\}$. When $p$ is a large value, we can obtain

$$\frac{1}{p}\sum_{i=1}^{p}\mathbf{y}_{iT} \approx \frac{1}{p}\sum_{i=1}^{p} E\{\mathbf{y}_{iT}\} = \mathbf{A}\,\mathbf{h}_n. \qquad (A7)$$

When $T \geq L_c$, based on the LS algorithm, we can obtain (3).

Author Contributions

Conceptualization, L.W. and G.Y.; methodology, L.W. and P.Q.; validation, P.Q. and L.W.; formal analysis, L.W. and J.L.; investigation, G.Y. and L.W.; resources, L.W.; data curation, L.W. and G.Y.; writing—original draft preparation, P.Q., J.L. and G.Y.; writing—review and editing, T.C., L.W., G.Y., J.L. and P.Q.; visualization, P.Q. and J.L.; supervision, L.W.; project administration, L.W. and G.Y.; funding acquisition, L.W., X.W. and G.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the China Scholarship Council, in part by the General Program of National Natural Science Foundation of China under Grant 61771271, in part by the General Project of Natural Science Foundation of Shandong Province under Grant ZR2020MF010 and Grant ZR2020MF001 and in part by the Qingdao Source Innovation Program under Grant 19-6-2-4-cg.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Footnotes

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1.Qarabaqi P., Milica S. Statistical characterization and computationally efficient modeling of a class of underwater acoustic communication channels. IEEE J. Ocean. Eng. 2013;38:701–717. doi: 10.1109/JOE.2013.2278787. [DOI] [Google Scholar]
  • 2.Song A., Stojanovic M., Chitre M. Editorial underwater acoustic communications: Where we stand and what is next? IEEE J. Ocean. Eng. 2019;44:1–6. doi: 10.1109/JOE.2018.2883872. [DOI] [Google Scholar]
  • 3.Yang T.C. Properties of underwater acoustic communication channels in shallow water. J. Acoust. Soc. Am. 2012;131:129–145. doi: 10.1121/1.3664053. [DOI] [PubMed] [Google Scholar]
  • 4.Benson A., Proakis J., Stojanovic M. Towards robust adaptive acoustic communications; Proceedings of the OCEANS 2000 MTS/IEEE Conference and Exhibition. Conference Proceedings (Cat. No. 00CH37158); Providence, RI, USA. 11–14 September 2000; pp. 1243–1249. [Google Scholar]
  • 5.Radosevic A., Ahmed R., Duman T.M., Proakis J.G., Stojanovic M. Adaptive OFDM modulation for underwater acoustic communications: Design considerations and experimental results. IEEE J. Ocean. Eng. 2014;39:357–370. doi: 10.1109/JOE.2013.2253212. [DOI] [Google Scholar]
  • 6.Wan L., Zhou H., Xu X., Huang Y., Zhou S., Shi Z., Cui J.H. Adaptive modulation and coding for underwater acoustic OFDM. IEEE J. Ocean. Eng. 2015;40:327–336. doi: 10.1109/JOE.2014.2323365. [DOI] [Google Scholar]
  • 7.Barua S., Rong Y., Nordholm S., Chen P. Adaptive modulation for underwater acoustic OFDM communication; Proceedings of the OCEANS 2019-Marseille; Marseille, France. 17–20 June 2019; pp. 1–5. [Google Scholar]
  • 8.Chao L., Franck M., Fan Y. Demodulation of chaos phase modulation spread spectrum signals using machine learning methods and its evaluation for underwater acoustic communication. Sensors. 2018;18:4217. doi: 10.3390/s18124217. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Yang W., Yang T.C. High-frequency channel characterization for M-ary frequency-shift-keying underwater acoustic communications. J. Acoust. Soc. Am. 2006;120:2615–2626. doi: 10.1121/1.2346133. [DOI] [Google Scholar]
  • 10.Yang T.C. Spatially multiplexed CDMA multiuser underwater acoustic communications. IEEE J. Ocean. Eng. 2016;41:217–231. doi: 10.1109/JOE.2015.2412993. [DOI] [Google Scholar]
  • 11.Yin J., Yang G., Huang D., Jin L., Guo Q. Blind adaptive multi-user detection for under-ice acoustic communications with mobile interfering users. J. Acoust. Soc. Am. 2017;141:70–75. doi: 10.1121/1.4974757. [DOI] [PubMed] [Google Scholar]
  • 12.Ma L., Zhou S., Qiao G., Liu S., Zhou F. Superposition coding for downlink underwater acoustic OFDM. IEEE J. Ocean. Eng. 2017;42:175–187. doi: 10.1109/JOE.2016.2540741. [DOI] [Google Scholar]
  • 13.Tao J. DFT-precoded MIMO OFDM underwater acoustic communications. IEEE J. Ocean. Eng. 2018;43:805–819. doi: 10.1109/JOE.2017.2735590. [DOI] [Google Scholar]
  • 14.Qiao G., Song Q., Ma L., Wan L. A low-complexity orthogonal matching pursuit based channel estimation method for time-varying underwater acoustic OFDM systems. Appl. Acoust. 2019;148:246–250. doi: 10.1016/j.apacoust.2018.12.026. [DOI] [Google Scholar]
  • 15.Roudsari H., Bousquet J. A time-varying filter for doppler compensation applied to underwater acoustic OFDM. Sensors. 2018;19:105. doi: 10.3390/s19010105. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Bai Y., Bouvet P.J. Orthogonal chirp division multiplexing for underwater acoustic communication. Sensors. 2018;18:3815. doi: 10.3390/s18113815. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Xi J., Yan S., Xu L. Direct-adaptation based bidirectional turbo equalization for underwater acoustic communications: Algorithm and undersea experimental results. J. Acoust. Soc. Am. 2018;143:2715–2728. doi: 10.1121/1.5036730. [DOI] [PubMed] [Google Scholar]
  • 18.Yin J., Ge W., Han X., Guo L. Frequency-domain equalization with interference rejection combining for single carrier multiple-input multiple-output underwater acoustic communications. J. Acoust. Soc. Am. 2020;147:EL138–EL143. doi: 10.1121/10.0000711. [DOI] [PubMed] [Google Scholar]
  • 19.Qin X., Qu F., Zheng Y.R. Bayesian iterative channel estimation and turbo equalization for multiple-input-multiple-output underwater acoustic communications. IEEE J. Ocean. Eng. 2021;46:326–337. doi: 10.1109/JOE.2019.2956299. [DOI] [Google Scholar]
  • 20.Wang S., Liu M., Li D. Bayesian learning-based clustered-sparse channel estimation for time-varying underwater acoustic OFDM communication. Sensors. 2021;21:4889. doi: 10.3390/s21144889. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Sun L., Wang M., Zhang G., Li H., Huang L. Filtered multitone modulation underwater acoustic communications using low-complexity channel-estimation-based MMSE turbo equalization. Sensors. 2019;19:2714. doi: 10.3390/s19122714. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Cen Y., Liu M., Li D., Meng K., Xu H. Double-scale adaptive transmission in time-varying channel for underwater acoustic sensor networks. Sensors. 2021;21:2252. doi: 10.3390/s21062252. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Berger C.R., Zhou S., Preisig J.C., Willett P. Sparse channel estimation for multicarrier underwater acoustic communication: From subspace methods to compressed sensing. IEEE Trans. Signal Process. 2010;58:1708–1721. doi: 10.1109/TSP.2009.2038424. [DOI] [Google Scholar]
  • 24.Huang J., Zhou S., Huang J., Berger C.R., Willett P. Progressive inter-carrier interference equalization for OFDM transmission over time-varying underwater acoustic channels. IEEE J. Sel. Top. Signal Process. 2011;5:1524–1536. doi: 10.1109/JSTSP.2011.2160040. [DOI] [Google Scholar]
  • 25.Wang Z., Zhou S., Preisig J.C., Pattipati K.R., Willett P. Clustered adaptation for estimation of time-varying underwater acoustic channels. IEEE Trans. Signal Process. 2012;66:3079–3091. doi: 10.1109/TSP.2012.2189769. [DOI] [Google Scholar]
  • 26.Tadayon A., Stojanovic M. Iterative sparse channel estimation and spatial correlation learning for multichannel acoustic OFDM systems. IEEE J. Ocean. Eng. 2019;44:820–836. doi: 10.1109/JOE.2019.2932662. [DOI] [Google Scholar]
  • 27.Martins N.E., Jesus S.M. Blind estimation of the ocean acoustic channel by time-frequency processing. IEEE J. Ocean. Eng. 2006;31:646–656. doi: 10.1109/JOE.2005.850915. [DOI] [Google Scholar]
  • 28.Cai J., Su W., Zhang S., Chen K., Wang D. A semi-blind joint channel estimation and equalization single carrier coherent underwater acoustic communication receiver; Proceedings of the OCEANS 2016-Shanghai; Shanghai, China. 10–13 April 2016; pp. 1–6. [Google Scholar]
  • 29.Yang G., Guo Q., Ding H., Yan Q., Huang D. Joint message-passing-based bidirectional channel estimation and equalization with superimposed training for underwater acoustic communications. IEEE J. Ocean. Eng. 2021;46:1463–1476. doi: 10.1109/JOE.2021.3057916. [DOI] [Google Scholar]
  • 30.Guo Q., Ping L., Huang D. A low-complexity iterative channel estimation and detection technique for doubly selective channels. IEEE Trans. Wirel. Commun. 2009;8:4340–4349. [Google Scholar]
  • 31.Pelekanakis K., Chitre M. Robust equalization of mobile underwater acoustic channels. IEEE J. Ocean. Eng. 2015;40:775–784. doi: 10.1109/JOE.2015.2469895. [DOI] [Google Scholar]
  • 32.Orozco-Lugo A.G., Lara M.M., McLernon D.C. Channel estimation using implicit training. IEEE Trans. Signal Process. 2004;52:240–254. doi: 10.1109/TSP.2003.819993. [DOI] [Google Scholar]
  • 33.Guo Q., Huang D. A concise representation for the soft-in soft-out LMMSE detector. IEEE Commun. Lett. 2011;15:566–568. [Google Scholar]
  • 34.Guo Q., Huang D., Nordholm S., Xi J., Yu Y. Iterative frequency domain equalization with generalized approximate message passing. IEEE Signal Process. Lett. 2013;20:559–562. doi: 10.1109/LSP.2013.2256783. [DOI] [Google Scholar]
