Entropy. 2021 Dec 14;23(12):1679. doi: 10.3390/e23121679

Zero-Delay Joint Source Channel Coding for a Bivariate Gaussian Source over the Broadcast Channel with One-Bit ADC Front Ends

Weijie Zhao 1,2, Xuechen Chen 1,*
Editors: Tobias Koch, Stefan M Moser

Abstract

In this work, we consider the zero-delay transmission of bivariate Gaussian sources over a Gaussian broadcast channel with one-bit analog-to-digital converter (ADC) front ends. An outer bound on the conditional distortion region is derived. Focusing on the minimization of the average distortion, two types of methods are proposed to design nonparametric mappings. The first one is based on the joint optimization between the encoder and decoder with the use of an iterative algorithm. In the second method, we derive the necessary conditions to develop the optimal encoder numerically. Using these necessary conditions, an algorithm based on gradient descent search is designed. Subsequently, the characteristics of the optimized encoding mapping structure are discussed and, inspired by them, several parametric mappings are proposed. Numerical results show that the proposed parametric mappings outperform the uncoded scheme and previous parametric mappings designed for broadcast channels with infinite resolution ADC front ends. The nonparametric mappings in turn outperform the parametric mappings. The causes of the performance differences between the two nonparametric mappings are analyzed. The average distortions of the parametric and nonparametric mappings proposed here are close to the bound for the cases with one-bit ADC front ends in low channel signal-to-noise ratio regions.

Keywords: joint source channel coding, zero-delay transmission, broadcast channel, bivariate Gaussian sources, average distortion, one-bit ADC

1. Introduction

Traditional digital communication systems, based on Shannon’s separation principle between source and channel coding [1], concentrate on mappings with long block lengths. Although these separated systems are not very robust to channel variation, optimality can be achieved given that no constraints are considered in terms of complexity and delay. However, these systems have become unsuitable for certain emerging applications that require transmission under extreme latency constraints, such as those involving internet of things (IoT) technologies [2] or wireless sensor networks (WSNs) [3]. In these application scenarios, strict delay constraints are present owing to the near real-time monitoring and feedback between users and the underlying physical systems. For example, with the full realization of the industry 4.0 revolution in the forthcoming sixth-generation (6G) connection standards, machine controls are expected to achieve real-time operations with guaranteed microsecond delay jitter [4].

As a result, we consider the extreme case of joint source channel coding (JSCC), zero-delay transmission, where a single source sample is transmitted over a single use of the channel.

A well-known approach for zero-delay transmission is the linear scheme, in other words, the uncoded scheme, which achieves the minimum squared distortion for a Gaussian source transmitted over an additive white Gaussian noise (AWGN) channel with an input power constraint [5]. In the point-to-point setting, the linear scheme is an alternative to the optimal separate source and channel coding (SSCC). The linear scheme outperforms SSCC in terms of simplicity and zero delay, specifically in applications that include uncoded video transmission [6] and real-time control systems for IoT [7]. However, at times, the linear scheme is not sufficient for exploiting the additional degrees of freedom available in the multi-terminal system. In many multi-terminal scenarios, both the SSCC and linear schemes underperform in terms of optimality. In [8], Bross et al. proved that, for the transmission of a memoryless bivariate Gaussian source over the Gaussian broadcast channel (GBC), the uncoded scheme achieves optimality whenever the channel signal-to-noise ratio (CSNR) is below a certain threshold. To date, various zero-delay analog mappings, including parametric and nonparametric mappings, have been proposed for different scenarios [9,10,11]. In [12,13,14], hybrid digital and analog (HDA) schemes for zero-delay transmission that obtain superior performance to the uncoded schemes in various multi-terminal cases have been reported.

The analog-to-digital converter (ADC) plays an important role in the receiving antenna as the key component of the front end of the digital receiver. The power consumption of an ADC increases exponentially with its resolution [15]. This drawback leads to a growing concern over the energy consumption of the receiving ends. In [16], Jeon et al. proposed computationally efficient yet near-optimal soft-output detection methods for coded millimeter-wave (mmWave) multiple input multiple output (MIMO) systems with low-precision ADCs. The proposed method provides significant gains compared to the existing techniques in the same setting with the use of low-precision ADCs. In [17], Dong et al. analyzed the uplink performance of a multiuser massive MIMO system with spatially correlated channels and low-precision ADCs. Herein, we consider an extreme case, namely, one-bit ADCs, which can be realized by a simple threshold comparator without the need for sophisticated gain control [18,19].

The advantages of the one-bit ADC front end on the performance of a specific communication system have been analyzed in the literature for numerous models. In [20], a low-complexity, near-maximum-likelihood-detection (near-MLD) algorithm was presented for an uplink massive MIMO system with one-bit ADCs, where the authors prove that the proposed algorithm achieves near-MLD performance while reducing the computational complexity compared with the existing method. In [21], a supervised-learning technique is exploited to provide efficient and robust channel estimation and data detection in massive MIMO systems with one-bit ADCs. In [22], conditional adversarial networks for channel estimation in a massive MIMO system with one-bit ADCs were studied. Channel estimation algorithms were developed to exploit the low-rank property of mmWave channels with one-bit ADCs at the receivers [23]. The proposed methods achieve better channel reconstruction than compressed sensing-based techniques that aim at exploiting the sparsity of mmWave channels. In [24], Varasteh et al. considered the zero-delay transmission of a Gaussian source over an AWGN channel with a one-bit ADC front end and correlated side information at the receiver. Numerical results demonstrate the periodicity of the optimized encoder mapping.

Information transmission over broadcast channels is an appealing problem in multi-terminal communications. Numerous efforts have been expended in recent studies that focused on low/zero-delay transmission in this case. The asymptotic energy-distortion performance of zero-delay communication was investigated in [25] under the setting of Gaussian broadcasting. A constant lower bound on the energy-distortion dispersion pair is derived as well. In [26], the authors focused on the optimization of parametric continuous mappings that satisfy individual quality of service requirements. By contrast, in [27], Tian et al. provided a complete characterization of the achievable distortion region for the above problem. In [28], Hassanin et al. proposed a low complexity, low delay, analog JSCC system based on extensions of nested quantization techniques. In [29], they further presented the procedure for optimization of the decoding functions and analyzed the assessed performance improvements. For the case of lossy transmission of a Gaussian source over a GBC when there is correlated side information at the receiver, a practical, low delay digital scheme was studied [30]. With the idea of layered superposition transmission and the successive canceling method, the proposed scheme shows higher accuracy of source reconstruction compared with SSCC. In [31], Abou Saleh et al. studied the tradeoff between the distortion of the sources and the error of the interference estimation for the joint recovery of a bivariate Gaussian source and interference over the two-user Gaussian degraded broadcast channel in the presence of a common interference.

In this work, considering extremely low delay and low energy consumption requirements, we focus on the zero-delay JSCC communications system for a bivariate Gaussian source over a bandwidth-matched Gaussian broadcast channel with two receivers. Both of the receivers are equipped with a one-bit ADC front end. To the best of our knowledge, there are few works that have investigated this scenario. The main contributions of this work are summarized as follows:

  • Under the mean squared error (MSE) distortion criterion, an outer bound on the conditional distortion region is derived.

  • Two types of nonparametric mappings are proposed. The first one is based on the joint optimization between the encoder and decoder under an iterative algorithm. In the second method, the implicit functions for the optimal encoder and decoder are derived. Employing the necessary condition mentioned above, the optimized encoder is obtained using the gradient descent method. To the best of our knowledge, there is no previous work that derives the necessary condition of the optimal encoder for the transmission of correlated Gaussian sources over the broadcast channel with one-bit ADC front ends. Hence, our contribution lies in obtaining an encoder mapping that satisfies the derived necessary condition numerically and in revealing its structure in three-dimensional space.

  • Examining the optimized encoder obtained and imitating the property of its structure, we propose a series of parametric function curves applied to the system model. These mappings are easy to implement.

The remainder of the paper is organized as follows. In Section 2, we introduce the system model and explain the problem of interest. Section 3 focuses on the theoretical bounds under the setting of infinite resolution ADC and one-bit ADC front end. In Section 4, the analysis of the necessary conditions of the optimal encoder and decoder for the proposed system model is presented, and the optimized encoder mappings obtained via the aforementioned necessary condition with the use of two different algorithms are discussed. In Section 5, several new parametric mapping structures are presented. In Section 6, numerical results and analyses are provided and Section 7 concludes the paper.

Notation: Throughout the paper, the uppercase and lowercase letters denote random variables and their realizations, respectively. p(·) and Pr(·) represent the probability density function (pdf) and probability, respectively. The standard normal distribution and its pdf are denoted by N(0,1) and ϕ(·), respectively. Q(·) denotes the complementary cumulative distribution function of the standard normal distribution.

2. Problem Formulation

We consider the transmission of correlated Gaussian sources over Gaussian broadcast channels with one-bit ADCs at the receivers. The setup is illustrated in Figure 1. Herein, $X=(X_1,X_2)$ denotes a pair of memoryless and stationary bivariate Gaussian sources with zero mean and variance $\sigma_X^2$. The covariance matrix of the two sources is presented below,

$\begin{pmatrix} \sigma_X^2 & \rho\sigma_X^2 \\ \rho\sigma_X^2 & \sigma_X^2 \end{pmatrix},$ (1)

where $\rho\in[0,1]$. The source vector $X$ is transformed into a one-dimensional channel input $V$ with the use of a nonlinear mapping function $V=\alpha(X_1,X_2)$. The Gaussian memoryless broadcast channel is given by

$Y_i=\alpha(X_1,X_2)+N_i,\quad i=1,2,$ (2)

as shown in Figure 1, where $Y_i$ is the channel output for channel $i$, and $N_i$ is the AWGN for channel $i$, independent of $X_1$ and $X_2$, with zero mean and variance $\sigma_{n_i}^2$. Without loss of generality, we assume $\sigma_{n_1}^2<\sigma_{n_2}^2$. At the $i$-th receiver, the noisy signal $Y_i$ is quantized with a one-bit ADC, $\Gamma(\cdot)$. The output of the ADC is

$Z_i=\Gamma(Y_i)=\begin{cases} 0, & Y_i\ge 0 \\ 1, & Y_i<0. \end{cases}$ (3)

The decoder observing the ADC output reconstructs the source $X_i$ as $\hat{X}_i=\beta_i(Z_i)$, where $\beta_i(\cdot)$ denotes the $i$-th decoder.

Figure 1. Broadcasting bivariate Gaussian sources with one-bit ADC front end.

In this paper, we assume that the encoding mapping α follows an average power constraint,

$E\left[\alpha(X_1,X_2)^2\right]\le P.$ (4)

The average MSE distortion measure is used, which is given by

$\bar{D}=\overline{\mathrm{MSE}}=\frac{1}{2}\sum_{i=1}^{2}D_i=\frac{1}{2}\sum_{i=1}^{2}E\left[(X_i-\hat{X}_i)^2\right].$ (5)

Our target is to find the optimal source mapping function α and the decoding function βi to minimize the average MSE in (5) subject to the average power constraint in (4).
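As an illustration of the setup, the following sketch estimates the average MSE in (5) by Monte-Carlo simulation of the channel (2) and the one-bit ADC (3) for a given encoder and a given pair of two-level decoders. The function name, the linear example encoder, and the placeholder decoder values are ours for illustration and are not taken from the paper.

```python
import numpy as np

def average_mse(alpha, beta1, beta2, rho=0.7, var_n1=0.32, var_n2=1.0,
                n_samples=100_000, seed=0):
    """Monte-Carlo estimate of the average MSE in (5).
    alpha: encoder, maps (x1, x2) arrays to the channel input v.
    beta1, beta2: decoders, dicts mapping the ADC output {0, 1} to a value."""
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]                  # covariance matrix (1) with sigma_X^2 = 1
    x1, x2 = rng.multivariate_normal([0.0, 0.0], cov, n_samples).T
    v = alpha(x1, x2)                               # channel input V = alpha(X1, X2)
    z1 = (v + np.sqrt(var_n1) * rng.standard_normal(n_samples)) < 0   # one-bit ADC (3)
    z2 = (v + np.sqrt(var_n2) * rng.standard_normal(n_samples)) < 0
    d1 = np.mean((x1 - np.where(z1, beta1[1], beta1[0])) ** 2)
    d2 = np.mean((x2 - np.where(z2, beta2[1], beta2[0])) ** 2)
    return 0.5 * (d1 + d2)

# Example: uncoded (linear) encoder with equal weights and P = 1,
# with arbitrary (not optimized) two-level decoder values.
uncoded = lambda x1, x2: (x1 + x2) / np.sqrt(2.0 * (1.0 + 0.7))
print(average_mse(uncoded, {0: 0.5, 1: -0.5}, {0: 0.5, 1: -0.5}))
```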

3. Preliminaries

3.1. The Average Distortion Bound When Infinite Resolution ADC Front Ends Are Adopted

In [27], the authors derived the characterization of the achievable distortion region $\mathcal{D}(\sigma_X^2,\rho,P,\sigma_{n_1}^2,\sigma_{n_2}^2)$. The minimum and maximum values of $D_1$ are deduced as follows,

$D_1^{\min}=\frac{\sigma_{n_1}^2\sigma_X^2}{P+\sigma_{n_1}^2},\qquad D_1^{\max}=\sigma_X^2\,\frac{(1-\rho^2)P+\sigma_{n_1}^2}{P+\sigma_{n_1}^2}.$ (6)

Then, for each $D_1\in[D_1^{\min},D_1^{\max}]$,

$D_2(P,D_1,\sigma_X^2,\rho,\sigma_{n_1}^2,\sigma_{n_2}^2)=\min_{(D_1,d_2)\in\mathcal{D}(\sigma_X^2,\rho,P,\sigma_{n_1}^2,\sigma_{n_2}^2)}d_2.$ (7)

Subsequently, the average distortion can be obtained as $\bar{D}=\frac{1}{2}(D_1+D_2)$ for this distortion pair $(D_1,D_2)$. We select the smallest average distortion $\bar{D}_{\min}$ as the bound on the average distortion for the setting of bivariate Gaussian sources over the broadcast channel with infinite resolution ADC front ends.

3.2. The Average Distortion Bound When One-Bit ADC Front Ends Are Adopted

The genie-aided distortion region for the transmission of correlated Gaussian sources over a GBC with one-bit ADC front ends, $\mathcal{D}_c^{ADC}(\sigma_X^2,\rho,P,\sigma_{n_1}^2,\sigma_{n_2}^2)$, consists of all pairs $(D_{1|2}^{ADC},D_2^{ADC})$ such that

$D_{1|2}^{ADC}\ge\sigma_X^2(1-\rho^2)\,2^{-2\left[h\left(Q\left(\sqrt{\gamma P/\sigma_{n_1}^2}\right)\right)-h\left(Q\left(\sqrt{P/\sigma_{n_1}^2}\right)\right)\right]},\qquad D_2^{ADC}\ge\sigma_X^2\,2^{-2\left[1-h\left(Q\left(\sqrt{\gamma P/\sigma_{n_2}^2}\right)\right)\right]},$ (8)

for some $\gamma\in[0,1]$. A proof of (8) is given in Appendix A.

In the same way as in Section 3.1, we can obtain the average distortion bound.
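As a rough illustration of how this bound can be evaluated, the sketch below sweeps γ over [0, 1] and returns the smallest average of the two right-hand sides in (8), using the binary entropy function h(·) and the Q-function defined in Appendix A. The function names and the average-then-minimize procedure reflect our reading of the text, not code from the paper.

```python
import numpy as np
from scipy.stats import norm

def binary_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def one_bit_adc_avg_bound(P, var_x, rho, var_n1, var_n2, n_gamma=1001):
    """Sweep gamma in [0, 1] and keep the smallest average of the two
    right-hand sides of (8) (genie-aided outer bound of Section 3.2)."""
    Q = norm.sf                                   # complementary CDF of N(0, 1)
    gamma = np.linspace(0.0, 1.0, n_gamma)
    h_full = binary_entropy(Q(np.sqrt(P / var_n1)))
    d1 = var_x * (1 - rho ** 2) * 2.0 ** (-2 * (binary_entropy(Q(np.sqrt(gamma * P / var_n1))) - h_full))
    d2 = var_x * 2.0 ** (-2 * (1 - binary_entropy(Q(np.sqrt(gamma * P / var_n2)))))
    return float(np.min(0.5 * (d1 + d2)))

print(one_bit_adc_avg_bound(P=1.0, var_x=1.0, rho=0.7, var_n1=0.32, var_n2=1.0))
```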

4. Nonparametric Mappings

In this section, we proceed to develop two types of nonparametric mappings using the Lagrange multiplier method. We are going to study the optimal mapping such that the average distortion is minimized subject to the average power constraint.

Using the Lagrange multiplier method, we turn the constrained optimization problem of minimizing (5) subject to (4) into an unconstrained problem by forming the Lagrange cost function,

$J(\alpha,\beta_1,\beta_2)=\sum_{i=1}^{2}\frac{1}{2}E\left[(X_i-\hat{X}_i)^2\right]+\lambda E\left[\alpha(X_1,X_2)^2\right].$ (9)

Therefore, our target turns into minimizing the unconstrained problem as

$\min_{\alpha,\beta_1,\beta_2}J(\alpha,\beta_1,\beta_2).$ (10)

For a given λ, if the solution of the unconstrained problem (10) satisfies the average power constraint in (4), it is proven that the above solution also solves the constrained problem [32].

Herein, MSEi is expressed as

$\mathrm{MSE}_i=\iint \Pr(Z_i=0|X_1=x_1,X_2=x_2)\,p(x_1,x_2)\,(x_i-\beta_i(0))^2\,dx_2\,dx_1+\iint \Pr(Z_i=1|X_1=x_1,X_2=x_2)\,p(x_1,x_2)\,(x_i-\beta_i(1))^2\,dx_2\,dx_1.$ (11)

The actual transmission power is expressed as

$P_{\mathrm{act}}=\iint p(x_1,x_2)\,\alpha(x_1,x_2)^2\,dx_2\,dx_1.$ (12)

4.1. Nonparametric Mapping I

Herein, we proceed in a way similar to the vector quantizer design [33] by formulating the necessary conditions for optimality with the use of the discretization operation. This scheme is based on joint optimization with iteration between the mappings at the transmitter and receiver.

Note that the minimization of (10) is still difficult to achieve owing to the interdependencies between the components to be optimized. Therefore, we bypass this problem by optimizing the problem iteratively, one component at a time, while we keep the other components fixed.

Assuming that the decoders (β1,β2) are fixed, the optimal encoding mapping α can be expressed as   

$\alpha^{*}=\arg\min_{\alpha}\left\{\sum_{i=1}^{2}\frac{1}{2}E\left[(X_i-\hat{X}_i)^2\right]+\lambda E\left[\alpha(X_1,X_2)^2\right]\right\}.$ (13)

Note that since the joint pdf $p(x_1,x_2)$ in (12) is nonnegative, the optimization in (13) can be carried out pointwise for each source pair $(x_1,x_2)$,

$\alpha(x_1,x_2)=\arg\min_{v\in\mathbb{R}}\left[\frac{1}{2}\sum_{i=1}^{2}\mathrm{MSE}_i+\lambda v^2\right].$ (14)

Assuming that the encoder α is fixed, the optimal decoder is the minimum MSE (MMSE) estimator of Xi given Zi. The MMSE estimation for user i is given by

$\hat{x}_i=\beta_i(z_i)=E[X_i|z_i]=\int x_i\,p(X_i=x_i|Z_i=z_i)\,dx_i=\frac{\iint x_i\,p(x_1)\,p(x_2|x_1)\,\Pr(Z_i=z_i|V=\alpha(x_1,x_2))\,dx_2\,dx_1}{\iint p(x_1)\,p(x_2|x_1)\,\Pr(Z_i=z_i|V=\alpha(x_1,x_2))\,dx_2\,dx_1}.$ (15)

The design procedure is given by Algorithm 1. This type of iterative procedure has been used in other scenarios [34,35]. It is worth noting that the iterative optimization does not generally guarantee convergence to the globally optimal solution. A good choice of initialization can contribute to the avoidance of poor local minima.

Algorithm 1: Nonparametric Mapping I
  • Data: Initial mapping of α(x1,x2), the noise for different channels, and δ, which determines the instant at which the iterations will stop.

  • Result: Locally optimized (α,β1,β2).

(The Algorithm 1 listing is provided as an image in the published article.)
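For illustration, a possible implementation of this iterative procedure is sketched below, alternating the discretized encoder update (16) with the decoder updates (17) and (18) until the Lagrangian cost changes by less than δ. The default sizes are deliberately small (the paper uses 10^4 samples and L between 1281 and 2561); all function and variable names are our own choices rather than the original implementation.

```python
import numpy as np
from scipy.stats import norm

def nonparametric_mapping_1(rho, var_n1, var_n2, lam, P=1.0, n_samples=2000,
                            L=257, v_max=4.0, delta=1e-3, max_iter=100, seed=0):
    """Sketch of Algorithm 1: alternate the encoder update (16) and the decoder
    updates (17)-(18) until the Lagrangian cost (9) changes by less than delta."""
    rng = np.random.default_rng(seed)
    x1, x2 = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], n_samples).T
    v_grid = np.linspace(-v_max, v_max, L)             # discretized channel-input set Y
    sd1, sd2 = np.sqrt(var_n1), np.sqrt(var_n2)
    pz0 = lambda v, sd: 1.0 - norm.sf(v / sd)           # Pr(Z = 0 | v), see (20)
    alpha = np.sqrt(P / (2 * (1 + rho))) * (x1 + x2)    # linear initialization
    beta = np.zeros((2, 2))                             # beta[i, z]: decoder of receiver i+1
    cost_prev = np.inf
    for _ in range(max_iter):
        # decoder updates (17)-(18): conditional means of x_i given each ADC output
        for i, (xs, sd) in enumerate([(x1, sd1), (x2, sd2)]):
            w0 = pz0(alpha, sd)
            beta[i, 0] = np.sum(xs * w0) / np.sum(w0)
            beta[i, 1] = np.sum(xs * (1 - w0)) / np.sum(1 - w0)
        # encoder update (16): pointwise minimization over the grid for every sample
        w1, w2 = pz0(v_grid, sd1), pz0(v_grid, sd2)     # Pr(Z_i = 0 | v) on the grid
        c1 = w1 * (x1[:, None] - beta[0, 0]) ** 2 / 2 + (1 - w1) * (x1[:, None] - beta[0, 1]) ** 2 / 2
        c2 = w2 * (x2[:, None] - beta[1, 0]) ** 2 / 2 + (1 - w2) * (x2[:, None] - beta[1, 1]) ** 2 / 2
        total = c1 + c2 + lam * v_grid ** 2
        idx = np.argmin(total, axis=1)
        alpha = v_grid[idx]
        cost = float(np.mean(total[np.arange(n_samples), idx]))
        if abs(cost_prev - cost) < delta:               # stop when the cost stops improving
            break
        cost_prev = cost
    return alpha, beta
```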

For any given λ, using Algorithm 1 above, we obtain a certain encoder mapping α. The value of λ should be increased if the power $E[\alpha(X_1,X_2)^2]$ exceeds the power constraint P, and vice versa.
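This adjustment of λ can be automated by a simple bisection; the sketch below illustrates the rule just stated, where power_of(λ) stands for a hypothetical routine that runs Algorithm 1 with the given λ and returns the resulting power E[α²]. It assumes the power decreases monotonically in λ and that the bracket contains the target value.

```python
def find_lambda(power_of, P=1.0, lam_lo=1e-4, lam_hi=10.0, tol=1e-3):
    """Bisection on the Lagrange multiplier: increase lambda when the
    resulting power exceeds P, decrease it otherwise."""
    while lam_hi - lam_lo > tol:
        lam = 0.5 * (lam_lo + lam_hi)
        if power_of(lam) > P:
            lam_lo = lam          # power too high: penalize it more strongly
        else:
            lam_hi = lam
    return 0.5 * (lam_lo + lam_hi)
```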

For the actual implementation of (14) and (15), we implemented the following modifications and approximations owing to the fact that it is impossible to evaluate the formulas in the real domain. We generate Monte-Carlo samples from the distribution of X, which are denoted as the set $\mathcal{X}$. We discretize the channel input by a set $\mathcal{Y}$ with finite modulation points. The maximum/minimum values of the set $\mathcal{Y}$ are denoted as $\pm d\frac{L-1}{2}$, where L determines the number of points in the set, and d denotes the resolution. As the resolution d becomes smaller and the value L becomes larger, the set $\mathcal{Y}$ becomes closer to a continuous (analog) alphabet.

The discretized version of (14) is given by

$\alpha(x_1,x_2)=\arg\min_{v\in\mathcal{Y}}\left[\sum_{z_1\in\{0,1\}}\Pr(z_1|v)\,\frac{(x_1-\beta_1(z_1))^2}{2}+\sum_{z_2\in\{0,1\}}\Pr(z_2|v)\,\frac{(x_2-\beta_2(z_2))^2}{2}+\lambda v^2\right].$ (16)

The discretized version of (15) can be expressed as

$\hat{x}_1=\beta_1(z_1)=\frac{\sum_{x_1\in\mathcal{X}}x_1\sum_{x_2\in\mathcal{X}_2|x_1}\Pr(x_2|x_1)\Pr(z_1|\alpha(x_1,x_2))}{\sum_{x_1\in\mathcal{X}}\sum_{x_2\in\mathcal{X}_2|x_1}\Pr(x_2|x_1)\Pr(z_1|\alpha(x_1,x_2))},$ (17)

and

$\hat{x}_2=\beta_2(z_2)=\frac{\sum_{x_1\in\mathcal{X}}\sum_{x_2\in\mathcal{X}_2|x_1}x_2\Pr(x_2|x_1)\Pr(z_2|\alpha(x_1,x_2))}{\sum_{x_1\in\mathcal{X}}\sum_{x_2\in\mathcal{X}_2|x_1}\Pr(x_2|x_1)\Pr(z_2|\alpha(x_1,x_2))}.$ (18)

In our experiment, we use $10^4$ samples to define the set $\mathcal{X}$. We have also used $\delta=10^{-3}$, and kept $d(L-1)/2=4$ based on considerations of the power constraint at the transmitter. The value of L is chosen depending on the noise variance, within [1281, 2561] in our experiment, by taking into account the tradeoff between accuracy and computational cost. Figure 2 shows the tradeoff between the value of L and the computational cost. The y axis shows the runtime, in hours, required to obtain the result for one point in Figure 8b when $\rho$, $\sigma_{n_1}^2$, $\sigma_{n_2}^2$ are fixed.

Figure 2. The tradeoff between the value of L and the computational cost.

4.2. Nonparametric Mapping II

In the following subsection, we study the functional properties of the unconstrained problem. We obtain an implicit equation for the optimal encoder mapping. Subsequently, we derive the optimal mappings with the necessary conditions above via gradient descent search.

Our system model is symmetrical to some extent, with respect to the nature of the one-bit ADC output and the probability density distributions of the source and noise. We derive below the optimal decoder with the ADC output being 0 and 1 for channel 1, respectively.

$\hat{X}_1^0=E[X_1|Z_1=0]=\frac{\iint x_1\,p(x_1,x_2)\,\Pr(Z_1=0|V=\alpha(x_1,x_2))\,dx_2\,dx_1}{\Pr(Z_1=0)}$ (19a)
$=\frac{\iint x_1\,p(x_1,x_2)\left[1-Q\left(\frac{\alpha(x_1,x_2)}{\sigma_{n_1}}\right)\right]dx_2\,dx_1}{\Pr(Z_1=0)}=-\frac{\iint x_1\,p(x_1,x_2)\,Q\left(\frac{\alpha(x_1,x_2)}{\sigma_{n_1}}\right)dx_2\,dx_1}{\Pr(Z_1=0)}.$ (19b)

We elaborate (19a) in detail as follows. When $Z_1=0$, it means that $Y_1\ge 0$; hence $N_1\ge-\alpha(x_1,x_2)$. Then we have

$\Pr(Z_1=0|\alpha(x_1,x_2))=\Pr(N_1\ge-\alpha(x_1,x_2))=\frac{1}{\sqrt{2\pi}\,\sigma_{n_1}}\int_{-\alpha(x_1,x_2)}^{\infty}e^{-\frac{x^2}{2\sigma_{n_1}^2}}dx=Q\left(\frac{-\alpha(x_1,x_2)}{\sigma_{n_1}}\right)=1-Q\left(\frac{\alpha(x_1,x_2)}{\sigma_{n_1}}\right).$ (20)

In the same way, we could obtain $\hat{X}_1^1$ as

$\hat{X}_1^1=E[X_1|Z_1=1]=\frac{\iint x_1\,p(x_1,x_2)\,\Pr(Z_1=1|V=\alpha(x_1,x_2))\,dx_2\,dx_1}{\Pr(Z_1=1)}=\frac{\iint x_1\,p(x_1,x_2)\,Q\left(\frac{\alpha(x_1,x_2)}{\sigma_{n_1}}\right)dx_2\,dx_1}{\Pr(Z_1=1)}.$ (21)

Based on the results above, we can derive the following relationship,

$\hat{X}_1^0=-\hat{X}_1^1,$ (22)
$\hat{X}_2^0=-\hat{X}_2^1.$ (23)

Herein, $\hat{X}_j^i$ denotes the estimation when the ADC output is $i$ for channel $j$, where $i\in\{0,1\}$ and $j\in\{1,2\}$.

The overall average distortion is given by

$\begin{aligned}\bar{D}&=\frac{1}{2}\left(E\left[(X_1-\hat{X}_1)^2\right]+E\left[(X_2-\hat{X}_2)^2\right]\right)\\&=\frac{1}{2}\left(E\left[(X_1-\hat{X}_1)\tilde{X}_1\right]+E\left[(X_2-\hat{X}_2)\tilde{X}_2\right]\right)\\&\overset{(24a)}{=}\frac{1}{2}\left(E[X_1\tilde{X}_1]+E[X_2\tilde{X}_2]\right)\\&=\frac{1}{2}\left(E\left[X_1^2-X_1\hat{X}_1\right]+E\left[X_2^2-X_2\hat{X}_2\right]\right)\\&=\frac{1}{2}\left(\sigma_{X_1}^2+\sigma_{X_2}^2-E[X_1\hat{X}_1]-E[X_2\hat{X}_2]\right)<\frac{1}{2}\left(\sigma_{X_1}^2+\sigma_{X_2}^2\right),\end{aligned}$ (24)

where (24a) is attributed to the orthogonality of the MMSE estimation. $X_i$ denotes the source samples, while $\hat{X}_i$ and $\tilde{X}_i$ denote the estimation and the difference between the source and the estimation, respectively; to be more specific, $\tilde{X}_i=X_i-\hat{X}_i$.

Note that under the MSE distortion criterion, the optimal decoder is the MMSE estimator. The estimation of source $X_i$, for example $\hat{X}_1$, is obtained as follows,

$\hat{X}_1=\beta_1(z_1)=E[X_1|Z_1=z_1]=\int x_1\Pr(X_1=x_1|Z_1=z_1)\,dx_1=\frac{\int x_1\Pr(Z_1=z_1|X_1=x_1)\,p(x_1)\,dx_1}{\Pr(Z_1=z_1)}=\frac{\iint x_1\,p(x_1,x_2)\Pr(Z_1=z_1|X_1=x_1,X_2=x_2)\,dx_2\,dx_1}{\iint p(x_1,x_2)\Pr(Z_1=z_1|X_1=x_1,X_2=x_2)\,dx_2\,dx_1}=\frac{\iint x_1\,p(x_1,x_2)\,Q\left((-1)^{z_1+1}\frac{\alpha(x_1,x_2)}{\sigma_{n_1}}\right)dx_2\,dx_1}{\iint p(x_1,x_2)\,Q\left((-1)^{z_1+1}\frac{\alpha(x_1,x_2)}{\sigma_{n_1}}\right)dx_2\,dx_1},$ (25)

where the last equality in (25) is attributed to the fact that $\Pr(Z_1=0|V=\alpha(x_1,x_2))=Q\left(\frac{-\alpha(x_1,x_2)}{\sigma_{n_1}}\right)$ while $\Pr(Z_1=1|V=\alpha(x_1,x_2))=Q\left(\frac{\alpha(x_1,x_2)}{\sigma_{n_1}}\right)$; see (20).

In a similar way, $\hat{X}_2$ is obtained as

$\hat{X}_2=\beta_2(z_2)=\frac{\iint x_2\,p(x_1,x_2)\,Q\left((-1)^{z_2+1}\frac{\alpha(x_1,x_2)}{\sigma_{n_2}}\right)dx_1\,dx_2}{\iint p(x_1,x_2)\,Q\left((-1)^{z_2+1}\frac{\alpha(x_1,x_2)}{\sigma_{n_2}}\right)dx_1\,dx_2}.$ (26)

Herein, $z_1,z_2\in\{0,1\}$. Furthermore, we also notice that the estimation $\hat{X}_i$ is constant once $z_i$ is determined.

Owing to the orthogonality principle of the MMSE estimation, we can verify that $D_i=\sigma_{x_i}^2-E[X_i\hat{X}_i]$. Accordingly, we can rewrite the Lagrangian cost function and drop the constants that are independent of α,

$L(\alpha)=-\frac{1}{2}E[X_1\hat{X}_1]-\frac{1}{2}E[X_2\hat{X}_2]+\lambda E\left[\alpha(X_1,X_2)^2\right].$ (27)

Herein, we would like to reemphasize that we use $\phi(\cdot)$ to denote the pdf of the standard normal distribution and $\phi(\cdot,\cdot)$ to denote the pdf of the standard bivariate normal distribution. $Q(\cdot)$ denotes the complementary cumulative distribution function of the standard normal distribution.

By expanding (27), we proceed with the following process,

$\begin{aligned}&-\frac{1}{2}E[X_1\hat{X}_1]-\frac{1}{2}E[X_2\hat{X}_2]+\lambda E\left[\alpha(X_1,X_2)^2\right]\\
&=\sum_{i=1}^{2}\frac{-1}{2\sigma_{x_1}\sigma_{x_2}\sigma_{n_i}}\left(\iint\int_{-\infty}^{-\alpha(x_1,x_2)}x_i\,\beta_i(1)\,\phi\!\left(\frac{x_1}{\sigma_{x_1}},\frac{x_2}{\sigma_{x_2}}\right)\phi\!\left(\frac{n_i}{\sigma_{n_i}}\right)dn_i\,dx_2\,dx_1\right.\\
&\qquad\left.+\iint\int_{-\alpha(x_1,x_2)}^{\infty}x_i\,\beta_i(0)\,\phi\!\left(\frac{x_1}{\sigma_{x_1}},\frac{x_2}{\sigma_{x_2}}\right)\phi\!\left(\frac{n_i}{\sigma_{n_i}}\right)dn_i\,dx_2\,dx_1\right)+\frac{\lambda}{\sigma_{x_1}\sigma_{x_2}}\iint\phi\!\left(\frac{x_1}{\sigma_{x_1}},\frac{x_2}{\sigma_{x_2}}\right)\alpha^2(x_1,x_2)\,dx_2\,dx_1\qquad\text{(28a)}\\
&=-\frac{1}{2\sigma_{x_1}\sigma_{x_2}}\iint x_1\left[\beta_1(1)\,Q\!\left(\frac{\alpha(x_1,x_2)}{\sigma_{n_1}}\right)+\beta_1(0)\,Q\!\left(\frac{-\alpha(x_1,x_2)}{\sigma_{n_1}}\right)\right]\phi\!\left(\frac{x_1}{\sigma_{x_1}},\frac{x_2}{\sigma_{x_2}}\right)dx_2\,dx_1\\
&\qquad-\frac{1}{2\sigma_{x_1}\sigma_{x_2}}\iint x_2\left[\beta_2(1)\,Q\!\left(\frac{\alpha(x_1,x_2)}{\sigma_{n_2}}\right)+\beta_2(0)\,Q\!\left(\frac{-\alpha(x_1,x_2)}{\sigma_{n_2}}\right)\right]\phi\!\left(\frac{x_1}{\sigma_{x_1}},\frac{x_2}{\sigma_{x_2}}\right)dx_2\,dx_1+\frac{\lambda}{\sigma_{x_1}\sigma_{x_2}}\iint\phi\!\left(\frac{x_1}{\sigma_{x_1}},\frac{x_2}{\sigma_{x_2}}\right)\alpha^2(x_1,x_2)\,dx_2\,dx_1\qquad\text{(28b)}\\
&=\frac{1}{\sigma_{x_1}\sigma_{x_2}}\iint\left[-\frac{x_1}{2}\left(\beta_1(1)\,Q\!\left(\frac{\alpha(x_1,x_2)}{\sigma_{n_1}}\right)+\beta_1(0)\,Q\!\left(\frac{-\alpha(x_1,x_2)}{\sigma_{n_1}}\right)\right)-\frac{x_2}{2}\left(\beta_2(1)\,Q\!\left(\frac{\alpha(x_1,x_2)}{\sigma_{n_2}}\right)+\beta_2(0)\,Q\!\left(\frac{-\alpha(x_1,x_2)}{\sigma_{n_2}}\right)\right)+\lambda\,\alpha^2(x_1,x_2)\right]\phi\!\left(\frac{x_1}{\sigma_{x_1}},\frac{x_2}{\sigma_{x_2}}\right)dx_2\,dx_1.\qquad\text{(28c)}\end{aligned}$

Given that $\hat{X}_i$ is a discrete random variable with two values, $\beta_i(0)$ and $\beta_i(1)$, the first part of (28a) holds since

$\begin{aligned}E[X_i\hat{X}_i]&=\int x_i\,\beta_i(0)\,\Pr\!\left(\hat{X}_i=\beta_i(0)\,|\,X_i=x_i\right)p(x_i)\,dx_i+\int x_i\,\beta_i(1)\,\Pr\!\left(\hat{X}_i=\beta_i(1)\,|\,X_i=x_i\right)p(x_i)\,dx_i\\
&=\int x_i\,\beta_i(0)\,\Pr(Z_i=0|X_i=x_i)\,p(x_i)\,dx_i+\int x_i\,\beta_i(1)\,\Pr(Z_i=1|X_i=x_i)\,p(x_i)\,dx_i\\
&=\frac{1}{\sigma_{x_1}\sigma_{x_2}\sigma_{n_i}}\left(\iint\int_{-\alpha(x_1,x_2)}^{\infty}x_i\,\beta_i(0)\,\phi\!\left(\frac{x_1}{\sigma_{x_1}},\frac{x_2}{\sigma_{x_2}}\right)\phi\!\left(\frac{n_i}{\sigma_{n_i}}\right)dn_i\,dx_2\,dx_1+\iint\int_{-\infty}^{-\alpha(x_1,x_2)}x_i\,\beta_i(1)\,\phi\!\left(\frac{x_1}{\sigma_{x_1}},\frac{x_2}{\sigma_{x_2}}\right)\phi\!\left(\frac{n_i}{\sigma_{n_i}}\right)dn_i\,dx_2\,dx_1\right),\end{aligned}$

where $i^c=\{1,2\}\setminus\{i\}$ denotes the index of the other source.

Equation (28b) holds owing to the fact that

$\frac{1}{\sigma_{n_i}}\int_{-\infty}^{-\alpha(x_1,x_2)}\beta_i(1)\,\phi\!\left(\frac{n_i}{\sigma_{n_i}}\right)dn_i+\frac{1}{\sigma_{n_i}}\int_{-\alpha(x_1,x_2)}^{\infty}\beta_i(0)\,\phi\!\left(\frac{n_i}{\sigma_{n_i}}\right)dn_i=\beta_i(1)\,Q\!\left(\frac{\alpha(x_1,x_2)}{\sigma_{n_i}}\right)+\beta_i(0)\,Q\!\left(\frac{-\alpha(x_1,x_2)}{\sigma_{n_i}}\right).$

Define F(α(x1,x2),x1,x2) as:

$F(\alpha(x_1,x_2),x_1,x_2)=\frac{1}{\sigma_{x_1}\sigma_{x_2}}\left(-\frac{x_1}{2}\left[\beta_1(1)\,Q\!\left(\frac{\alpha(x_1,x_2)}{\sigma_{n_1}}\right)+\beta_1(0)\,Q\!\left(\frac{-\alpha(x_1,x_2)}{\sigma_{n_1}}\right)\right]-\frac{x_2}{2}\left[\beta_2(1)\,Q\!\left(\frac{\alpha(x_1,x_2)}{\sigma_{n_2}}\right)+\beta_2(0)\,Q\!\left(\frac{-\alpha(x_1,x_2)}{\sigma_{n_2}}\right)\right]+\lambda\,\alpha^2(x_1,x_2)\right).$ (29)

Then, writing (28c) in terms of $F(\alpha(x_1,x_2),x_1,x_2)$, the unconstrained problem can be expressed as

$\min_{\alpha}L(\alpha)=\min_{\alpha}\iint F(\alpha(x_1,x_2),x_1,x_2)\,\phi\!\left(\frac{x_1}{\sigma_{x_1}},\frac{x_2}{\sigma_{x_2}}\right)dx_2\,dx_1.$ (30)

According to the conclusion in Section 7.5 of [36], a necessary condition for $L(\alpha)$ to attain its minimum is that the partial derivative of F with respect to α, denoted by $F_\alpha(\alpha(x_1,x_2),x_1,x_2)$, is equal to 0. The partial derivative of F with respect to α is obtained as follows,

$F_\alpha(\alpha(x_1,x_2),x_1,x_2)=\frac{1}{\sigma_{x_1}\sigma_{x_2}}\left(-\frac{x_1}{2}\left[-\beta_1(1)\frac{1}{\sqrt{2\pi}\,\sigma_{n_1}}+\beta_1(0)\frac{1}{\sqrt{2\pi}\,\sigma_{n_1}}\right]e^{-\frac{\alpha(x_1,x_2)^2}{2\sigma_{n_1}^2}}-\frac{x_2}{2}\left[-\beta_2(1)\frac{1}{\sqrt{2\pi}\,\sigma_{n_2}}+\beta_2(0)\frac{1}{\sqrt{2\pi}\,\sigma_{n_2}}\right]e^{-\frac{\alpha(x_1,x_2)^2}{2\sigma_{n_2}^2}}+2\lambda\,\alpha(x_1,x_2)\right).$ (31)

After rearranging (31), the optimal encoder mapping α under the MSE distortion criterion must satisfy the implicit equation below,

$4\sqrt{2\pi}\,\lambda\,\alpha(x_1,x_2)=\frac{x_1}{\sigma_{n_1}}e^{-\frac{\alpha(x_1,x_2)^2}{2\sigma_{n_1}^2}}\left[\beta_1(0)-\beta_1(1)\right]+\frac{x_2}{\sigma_{n_2}}e^{-\frac{\alpha(x_1,x_2)^2}{2\sigma_{n_2}^2}}\left[\beta_2(0)-\beta_2(1)\right].$ (32)

To find the optimal encoder mapping, we perform the steepest descent search in the opposite direction of the functional derivative of the Lagrangian with respect to the encoder mapping α(x1,x2) as,

$\alpha_{i+1}(x_1,x_2)=\alpha_i(x_1,x_2)-\mu\,\nabla_\alpha L\!\left(\alpha_i(x_1,x_2)\right),$ (33)

where i is the iteration index, and μ>0 is the step size.

Hereafter, the gradient of the Lagrangian function L(α) over α is denoted as

$\nabla_\alpha L=4\sqrt{2\pi}\,\lambda\,\alpha(x_1,x_2)-\frac{x_1}{\sigma_{n_1}}e^{-\frac{\alpha^2(x_1,x_2)}{2\sigma_{n_1}^2}}\left[\beta_1(0)-\beta_1(1)\right]-\frac{x_2}{\sigma_{n_2}}e^{-\frac{\alpha^2(x_1,x_2)}{2\sigma_{n_2}^2}}\left[\beta_2(0)-\beta_2(1)\right].$ (34)

The overall design procedure for gradient descent search is given by Algorithm 2.

Algorithm 2: Nonparametric Mapping II

Data: Initial mapping of α(x1,x2), and the noise for different channels.

Result: Locally optimized (α,β1,β2).

(The Algorithm 2 listing is provided as an image in the published article.)
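As with Algorithm 1, a possible implementation of this procedure is sketched below, applying the steepest-descent update (33) with the gradient (34) while refreshing the decoder values through sample-average versions of (25) and (26). All names and default settings are our illustrative choices.

```python
import numpy as np
from scipy.stats import norm

def nonparametric_mapping_2(rho, var_n1, var_n2, lam, P=1.0,
                            n_samples=10_000, mu=0.01, n_iter=500, seed=0):
    """Sketch of Algorithm 2: steepest-descent update (33) of the encoder values
    at the sample points, using the gradient (34)."""
    rng = np.random.default_rng(seed)
    x1, x2 = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], n_samples).T
    sd1, sd2 = np.sqrt(var_n1), np.sqrt(var_n2)
    alpha = np.sqrt(P / (2 * (1 + rho))) * (x1 + x2)     # linear initialization
    for _ in range(n_iter):
        # decoders via sample averages of (25)-(26):
        # beta_i(1) weights samples by Q(alpha/sigma_ni), beta_i(0) by 1 - Q(alpha/sigma_ni)
        q1, q2 = norm.sf(alpha / sd1), norm.sf(alpha / sd2)
        b1 = (np.sum(x1 * (1 - q1)) / np.sum(1 - q1), np.sum(x1 * q1) / np.sum(q1))
        b2 = (np.sum(x2 * (1 - q2)) / np.sum(1 - q2), np.sum(x2 * q2) / np.sum(q2))
        # gradient (34) evaluated at every sample point (x1, x2)
        grad = (4 * np.sqrt(2 * np.pi) * lam * alpha
                - (x1 / sd1) * np.exp(-alpha ** 2 / (2 * var_n1)) * (b1[0] - b1[1])
                - (x2 / sd2) * np.exp(-alpha ** 2 / (2 * var_n2)) * (b2[0] - b2[1]))
        alpha = alpha - mu * grad                         # steepest-descent step (33)
    return alpha, b1, b2
```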

5. Parametric Mappings

Compared with the nonparametric mappings, parametric mappings have obvious advantages in terms of their lower computational cost and fixed functional structures. Moreover, they could be updated according to the variations of the signal properties and channel conditions by adjusting their parameters.

Figure 3 and Figure 4 show plots of the optimized encoder mapping with Algorithms 1 and 2, respectively. Herein and in the sequel, we define the CSNR as

$\mathrm{CSNR}=10\log_{10}\frac{NP}{\sum_{i=1}^{N}\sigma_{n_i}^2}.$ (35)

Figure 3. Optimized encoder mapping with Algorithm 1 for $\sigma_{n_1}^2=0.32$, $\sigma_{n_2}^2=1$, $P=1$ and $\rho=0.7$. (a) The curved surface; (b) the corresponding two-dimensional representation; (c) the curve when $X_1=X_2$.

Figure 4. Optimized encoder mapping with Algorithm 2 for $\sigma_{n_1}^2=0.32$, $\sigma_{n_2}^2=1$, $P=1$ and $\rho=0.7$. (a) The curved surface; (b) the corresponding two-dimensional representation; (c) the curve when $X_1=X_2$.

Although the structures of the two nonparametric mappings are not exactly the same, we can still summarize some common characteristics. There exist two flat layers in both nonparametric mappings. Different degrees of deformation can be observed in the middle part of the two nonparametric mapping surfaces. When we fix $X_1=X_2$, the curve of $\alpha(X_1,X_2)$ with respect to $X_1$ is shown in Figure 3c and Figure 4c. The shapes of the two nonparametric mappings obtained above inspire us to propose several different parametric encoding schemes.

After examining the nonparametric mappings mentioned above for different CSNRs and correlation coefficients ρ, we also notice that, due to the symmetry of the system, if $\rho=1$ and $\sigma_{n_1}=\sigma_{n_2}$, the problem studied reduces to the point-to-point problem presented in [37]. In the case of $\rho=1$ and $\sigma_{n_1}=\sigma_{n_2}$ together with infinite resolution front ends, the problem reduces to the one in [38].

5.1. Linear Transmission

In [8], the linear scheme for the transmission of bivariate Gaussian sources over a GBC is proposed. The encoder mapping for the linear transmission is given by

$V=\sqrt{\frac{P}{\sigma_X^2\left(\omega^2+\zeta^2+2\omega\zeta\rho\right)}}\,(\omega X_1+\zeta X_2),$ (36)

where $\omega\in[0,1]$ and $\zeta=1-\omega$.

A closed-form expression for the average distortion $\bar{D}$ of linear transmission is difficult to obtain. We therefore substitute (36) into (15) to obtain $\bar{D}$ numerically.

5.2. Sigmoid-like Function

From Figure 3 and Figure 4, we can observe that there exists a flat platform in the optimized encoding mapping. This feature is similar to the sigmoid function to some extent. Therefore, we propose to adopt the sigmoid-like function, which is defined as

$\alpha(x_1,x_2)=\frac{1}{1+e^{-(a_1x_1+a_2x_2)}}-\frac{1}{2},$ (37)

where a1 and a2 jointly control the offset angle of the mapping on the X-Y plane and the extension of the mapping surface.

The optimization step can be achieved by an exhaustive search on the parameter space to jointly determine the optimal values for a1 and a2. The results are obtained via Monte-Carlo optimization of parameters a1 and a2 in (37) so that the SDR is maximized.
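A minimal sketch of such a search is given below, assuming $\sigma_X^2=1$ and assuming (our choice, not stated in the paper) that the sigmoid-like mapping is rescaled so that the power constraint (4) is met with equality; the grid ranges, sample size, and function names are ours.

```python
import numpy as np

def sdr_sigmoid(a1, a2, rho=0.7, var_n1=0.32, var_n2=1.0, P=1.0, n=50_000, seed=0):
    """Monte-Carlo SDR of the sigmoid-like encoder (37) with sample-average MMSE decoders."""
    rng = np.random.default_rng(seed)
    x1, x2 = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], n).T
    v = 1.0 / (1.0 + np.exp(-(a1 * x1 + a2 * x2))) - 0.5     # sigmoid-like mapping (37)
    v *= np.sqrt(P / np.mean(v ** 2))                         # rescale so E[V^2] = P (our assumption)
    d_total = 0.0
    for xs, var_n in [(x1, var_n1), (x2, var_n2)]:
        z = (v + np.sqrt(var_n) * rng.standard_normal(n)) < 0   # channel (2) + one-bit ADC (3)
        b0, b1 = xs[~z].mean(), xs[z].mean()                    # decoder values for Z = 0 and Z = 1
        d_total += np.mean((xs - np.where(z, b1, b0)) ** 2)
    return 10 * np.log10(1.0 / (d_total / 2))                   # SDR = 10 log10(sigma_X^2 / D_bar)

# Coarse exhaustive search over (a1, a2)
grid = np.linspace(0.1, 5.0, 15)
best = max((sdr_sigmoid(a, b), a, b) for a in grid for b in grid)
print(best)
```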

5.3. Sinh-like Function

The parametric sine-like mapping in [26] is proposed to satisfy individual quality of service requirements in Gaussian broadcast channels. We adapt the parametric curve structure in our setting and propose the new mapping as indicated below,

$c_{b_1,b_2}(t)=U\Sigma^{1/2}s(t),$ (38)
$s(t)=\left[s_x(t)\ \ s_y(t)\right]^T=\left[t^2\sinh(b_1 t)\ \ \ b_2\sinh(b_1 t)\right]^T,$ (39)

where $U^H\Sigma U$ is the eigendecomposition of the covariance matrix (1), with $U$ the matrix whose columns are the eigenvectors, and $\Sigma=\mathrm{diag}\{\eta_1,\eta_2\}$, $\eta_1>\eta_2$.

The optimization of b1 and b2 is achieved by exhaustively searching the parameter space.

5.4. Shannon-Kotel’nikov-like Function

Shannon-Kotel’nikov (S-K) mappings are studied in previous works such as [39,40]. For the 2:1 bandwidth reduction scenario, the spiral curve is given by

s(t)=±Δπ(cos(φ(ct))i+sin(φ(ct))j), (40)

where

φ(ωt)=c|t|Δη.

We implement the following modification to the S-K mapping function so that the pitch of the mapping curve varies. In other words, the (radial) distance between the two spiral arms varies along the curve instead of remaining constant as in previous works.

$s(t)=\pm\frac{t^2}{\pi}\left(\cos(c|t|)\,\mathbf{i}+\sin(c|t|)\,\mathbf{j}\right).$ (41)

The optimization of c in (41) is achieved by exhaustively searching the parameter space as well.

The curved surfaces of the sigmoid-like function, sinh function, S-K-like function and uncoded scheme are depicted in Figure 5a–d, respectively. Their corresponding two-dimensional representations are depicted in Figure 6a–d, respectively. While fixing X1=X2, the curves of α(X1,X2) with respect to X1 are shown in Figure 7a–d.

Figure 5. Curved surfaces of the sigmoid-like function, sinh function, S-K-like function and uncoded scheme with optimized parameters for $\sigma_{n_1}^2=0.32$, $\sigma_{n_2}^2=1$, $P=1$ and $\rho=0.7$. (a): sigmoid-like function, (b): sinh-function, (c): S-K-like function, (d): uncoded scheme.

Figure 6. The two-dimensional representations of the curved surfaces of the sigmoid-like function, sinh function, S-K-like function and uncoded scheme with optimized parameters for $\sigma_{n_1}^2=0.32$, $\sigma_{n_2}^2=1$, $P=1$ and $\rho=0.7$. (a): sigmoid-like function, (b): sinh-function, (c): S-K-like function, (d): uncoded scheme.

Figure 7. When $X_1=X_2$, the curves of the sigmoid-like function, sinh function, S-K-like function and uncoded scheme with optimized parameters for $\sigma_{n_1}^2=0.32$, $\sigma_{n_2}^2=1$, $P=1$ and $\rho=0.7$. (a): sigmoid-like function, (b): sinh-function, (c): S-K-like function, (d): uncoded scheme.

6. Numerical Results

In this section, we present the performances and validate the effectiveness of the nonparametric and parametric mappings introduced in the previous sections. In the following experiments, the overall MSE is still defined as $\bar{D}=\frac{1}{2}(D_1+D_2)$, and the signal-to-distortion ratio (SDR) is defined as $10\log_{10}(\sigma_X^2/\bar{D})$. The average distortion bound when infinite resolution ADC front ends are adopted is denoted as bound A, and as bound B when one-bit ADC front ends are adopted.

According to (35), we change the values of CSNR by fixing the channel noise and varying the transmitting power, or vice versa. In the following experiments, without loss of generality, $\sigma_X$ is set to 1, since for other values of $\sigma_X$ normalization can be adopted.

Under the average distortion criterion, we compare the parametric mappings mentioned in Section 5 with two state-of-the-art parametric mappings proposed for the broadcast channel with infinite resolution ADC front ends, the sine-like curve [26] and the alternating sign-scalar quantizer linear coder (AS-SQLC) [28], for different values of CSNR, as shown in Figure 8a. The performance of the sigmoid-like mapping is superior to that of all the other parametric schemes. With the exception of the uncoded transmission scheme, the proposed parametric schemes inspired by the optimal functional properties yield better performances than the AS-SQLC scheme and the sine-like scheme.

Figure 8. Average distortion performance for $\sigma_{n_1}^2=0.32$, $\sigma_{n_2}^2=1$, $\rho=0.7$ by the relevant schemes at different values of P. (a): Performance comparisons of all the relevant parametric mappings, (b): Performance comparisons of the sigmoid-like function, the non-parametric mappings and the bounds.

In Figure 8b, we compare the sigmoid-like function (37) and the two nonparametric mappings with the conditional outer bound under one-bit ADC and the outer bound in the case of an infinite resolution ADC front end. Figure 8b shows the performance curves of the proposed parametric sigmoid-like function and two nonparametric mappings in terms of SDR versus CSNR with correlation coefficient ρ=0.7. Herein, to vary CSNR, the values of σn12 and σn22 in (35) are fixed to 0.56 and 1, respectively, while the average transmitting power is varied. The bound for the scenario with one-bit ADC front end and the bound for the scenario with an infinite resolution front end are indicated in this figure with purple squares and blue circles, respectively.

We observe that as the CSNR increases, the bound for an infinite resolution front end pulls increasingly ahead of the bound under the one-bit ADC front end. The two nonparametric mappings outperform the parametric sigmoid-like mapping, with nonparametric mapping I leading nonparametric mapping II. Meanwhile, the performances of the two nonparametric mappings approach the bound under the one-bit ADC front end.

We also compare the average distortions of the relevant schemes with increasing CSNR when ρ=0.6 and ρ=0.2 in Figure 9 and Figure 10, respectively. Similarly, the sigmoid-like mapping performs best among all parametric mappings, while nonparametric mapping I performs better than nonparametric mapping II.

Figure 9. Average distortion performance for $\sigma_{n_1}^2=0.32$, $\sigma_{n_2}^2=1$, $\rho=0.6$ by the relevant schemes at different values of P. (a): Performance comparisons of all the relevant parametric mappings, (b): Performance comparisons of the sigmoid-like function, the non-parametric mappings and the bounds.

Figure 10. Average distortion performance for $\sigma_{n_1}^2=0.32$, $\sigma_{n_2}^2=1$, $\rho=0.2$ by the relevant schemes at different values of P. (a): Performance comparisons of all the relevant parametric mappings, (b): Performance comparisons of the sigmoid-like function, the non-parametric mappings and the bounds.

In Figure 11 and Figure 12, we plot the SDR versus correlation coefficient ρ when CSNR is equal to 1.8 and 11.8 dB, respectively. Herein, we have kept the transmitting power P=1, while we change the channel noise, with σn12=0.32 and σn22=1 in Figure 11, and σn12=0.032 and σn22=0.1 in Figure 12.

Figure 11. Average distortion performance for $\sigma_{n_1}^2=0.32$, $\sigma_{n_2}^2=1$, $P=1$ with optimized values of parameters at different values of ρ. (a): Performance comparisons of all the relevant parametric mappings, (b): Performance comparisons of the sigmoid-like function, the non-parametric mappings and the bounds.

Figure 12. Average distortion performance for $\sigma_{n_1}^2=0.032$, $\sigma_{n_2}^2=0.1$, $P=1$ with optimized values of parameters at different values of ρ. (a): Performance comparisons of all the relevant parametric mappings, (b): Performance comparisons of the sigmoid-like function, the non-parametric mappings and the bounds.

When the CSNR is low (e.g., 1.8 dB), as shown in Figure 11, the sigmoid-like mapping outperforms all the other parametric mappings at different correlation coefficient values, while the sinh mapping and the S-K-like mapping are both superior to the remaining parametric ones. With the increase in the correlation coefficient ρ, the uncoded scheme lags behind the AS-SQLC scheme and the sine-like scheme.

When CSNR increases to 11.8 dB as shown in Figure 12, the sigmoid-like mapping still yields the best performance within all the parametric mappings, and is inferior to the nonparametric ones, while the gap shrinks with the increase in the correlation coefficient ρ. For large values of the coefficient ρ, the performances of the AS-SQLC and sine-like scheme become closer to those of the proposed parametric mappings, while the uncoded scheme gradually lags behind the AS-SQLC and the sine-like schemes.

As observed in the mentioned figures, the proposed sigmoid-like mapping always yields a better performance than the AS-SQLC mapping and the sine-like mapping, which are particularly designed for a broadcast channel with an infinite resolution ADC front end. When the correlation coefficient ρ decreases, both the gap between the nonparametric and parametric mappings and the gap between the parametric mappings proposed in this work and the AS-SQLC and sine-like schemes widen.

Note that nonparametric mapping I has a slight lead in performance over nonparametric mapping II. This is due to the fact that Algorithm 1 has a higher degree of freedom to place points in the channel space than Algorithm 2. The above gain comes at the expense of a higher computational cost.

As the CSNR increases, the parametric sigmoid-like mapping approaches the two nonparametric mappings more closely, indicating less gain from the nonparametric algorithms. We attribute this behavior to the fact that, as the communication conditions improve, the influence of the one-bit ADC front end becomes larger and harder to compensate for by the nonparametric mapping algorithms. In low-CSNR cases, when the influence of channel noise outweighs the impact of the one-bit ADC front end, the performance gains of the two nonparametric mapping algorithms become more obvious.

Figure 13, Figure 14 and Figure 15 plot the achievable distortion bounds for three parametric mappings, the bound with infinite resolution ADC front ends and the conditional outer bound with one-bit ADC front ends when different values are assigned to ρ. We would like to emphasize that the bounds discussed here are not the average distortion bounds shown in the previous figures. These bounds are obtained by searching for the minimal attainable D2 for a given D1, as shown in (7); they characterize the attainable distortion regions. To plot the D1-D2 curves for the proposed parametric encoders in Figure 13, Figure 14 and Figure 15, we varied the parameters to obtain a database of D1-D2 pairs. Then, for a given value of D1, we document the corresponding minimal value of D2. (It is hard to keep D1 constant in the practical experiments, so we collect the values of D1 around the given one, document the corresponding values of D2, and take the minimum among them.) Finally, we can plot the complete D1-D2 curves for the proposed parametric encoders.
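This selection of the minimal D2 within small D1 bins can be expressed compactly as in the sketch below; the bin count and function name are our own choices.

```python
import numpy as np

def d1_d2_envelope(pairs, n_bins=40):
    """Given (D1, D2) pairs collected by sweeping the encoder parameters,
    keep the minimal D2 found within each small D1 bin, as described above."""
    pairs = np.asarray(pairs, dtype=float)
    edges = np.linspace(pairs[:, 0].min(), pairs[:, 0].max(), n_bins + 1)
    curve = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (pairs[:, 0] >= lo) & (pairs[:, 0] < hi)
        if mask.any():
            curve.append((0.5 * (lo + hi), pairs[mask, 1].min()))
    return np.array(curve)       # rows of (D1 bin center, minimal D2)
```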

Figure 13. Distortion regions $(D_1,D_2)$ for $\sigma_{n_1}^2=0.32$, $\sigma_{n_2}^2=1$, $P=1$, $\rho=0.2$ for three parametric mappings and two bounds.

Figure 14. Distortion regions $(D_1,D_2)$ for $\sigma_{n_1}^2=0.32$, $\sigma_{n_2}^2=1$, $P=1$, $\rho=0.6$ for three parametric mappings and two bounds.

Figure 15. Distortion regions $(D_1,D_2)$ for $\sigma_{n_1}^2=0.32$, $\sigma_{n_2}^2=1$, $P=1$, $\rho=0.7$ for three parametric mappings and two bounds.

We observe that the bound for an infinite resolution ADC front end is relatively closer to the bound for a one-bit ADC front end for larger values of ρ. The sigmoid-like mapping outperforms the other two parametric mappings and is close to the bound for a one-bit ADC front end when ρ is relatively low.

Figure 16 and Figure 17 show the encoder structures of the nonparametric mapping I at two CSNR levels, respectively. As our system model can be approximated as a symmetric one, it is an interesting result that the optimized encoder mappings are odd as well. As CSNR increases, the structure of the encoder mapping is gradually distorted. The deformation above indicates the advantage of the nonparametric mappings compared with the parametric ones, since the former ones have a higher degree-of-freedom for placing points in the channel space rather than being restrained within a fixed structure.

Figure 16. Optimized encoder nonparametric mapping I with CSNR = 0 dB and $\rho=0.7$. (a) The curved surface; (b) the corresponding two-dimensional representation; (c) the curve when $X_1=X_2$.

Figure 17. Optimized encoder nonparametric mapping I with CSNR = 4 dB and $\rho=0.7$. (a) The curved surface; (b) the corresponding two-dimensional representation; (c) the curve when $X_1=X_2$.

7. Conclusions

In this work, we consider the transmission of bivariate Gaussian sources over Gaussian broadcast channels with one-bit ADC front ends. The conditional distortion outer bound for this scenario is derived. Two algorithms are proposed to design the nonparametric mappings. The nonparametric mapping I is achieved based on the iterative optimization between the encoder and the decoder. The nonparametric mapping II is achieved by gradient descent search based on the necessary conditions for the optimal encoder. Based on the characteristics of the optimal encoder mappings, we propose several parametric mappings. Despite a certain extent of performance degradation, the parametric mappings proposed herein can be used in place of the nonparametric mappings as they require lower computational cost and are more adaptable to the channel condition variations. Future extensions of this work include the derivation of the closed-form approximations for the mapping distortion, further design of parametric mappings applied to the system with fading channels, and investigations of the performance of the system with higher level ADC front ends.

Appendix A. Proof for Bound under One-Bit ADC Front End

Herein, we derive the genie-aided outer bound when $X_2^n$ is known at both the transmitter and receiver 1. On the basis of the data processing inequality (DPI), we have

$I(X_1^n;\hat{X}_1^n|X_2^n)\le I(X_1^n;Z_1^n|X_2^n).$ (A1)

The conditional rate-distortion function, under the assumption that $X_2^n$ is known to both the encoder and receiver 1, implies the following inequality

$I(X_1^n;\hat{X}_1^n|X_2^n)\ge\frac{n}{2}\log\frac{\sigma_X^2(1-\rho^2)}{D_{1|2}^{ADC}}.$ (A2)

Due to the existence of the Markov chain relationship $(X_1^n,X_2^n)\to V^n\to Z_1^n$, the mutual information $I(X_1^n;Z_1^n|X_2^n)$ can be expressed as

$I(X_1^n;Z_1^n|X_2^n)=H(Z_1^n|X_2^n)-H(Z_1^n|X_1^n,X_2^n)=H(Z_1^n|X_2^n)-H(Z_1^n|V^n).$ (A3)

Furthermore, we have the following inequality

$H(Z_1^n)\ge H(Z_1^n|X_2^n)\ge H(Z_1^n|V^n).$ (A4)

As shown in Lemma 2 of [41], since the quantizer is symmetric, no optimality is lost by restricting attention to symmetric input distributions. In doing so, as the quantizer and the noise are already symmetric, the probability mass function (PMF) of Z is also symmetric. Hence, $H(Z_1)=1$.

With the one-bit symmetric quantization, the quantized channel output is

$Z_1^n=\Gamma(V^n+N_1^n).$ (A5)

Since Z1 is binary, it holds that

$H(Z_1|V)=E\left[-\Pr(Z_1=0|V)\log\Pr(Z_1=0|V)-\Pr(Z_1=1|V)\log\Pr(Z_1=1|V)\right],$

then it is easy to derive that

$H(Z_1|V)=E\left[h\!\left(Q\!\left(\frac{V}{\sigma_{n_1}}\right)\right)\right],$ (A6)

where $Q(x)=\frac{1}{\sqrt{2\pi}}\int_x^{\infty}\exp(-t^2/2)\,dt$, and $h(\cdot)$ denotes the binary entropy function

$h(p)=-p\log(p)-(1-p)\log(1-p),\quad 0\le p\le 1.$

The conclusion is also presented in [41].

Since h(Q(·)) is an even function, we have

$H(Z_1|V)=E\left[h\!\left(Q\!\left(\frac{V}{\sigma_{n_1}}\right)\right)\right]=E\left[h\!\left(Q\!\left(\frac{|V|}{\sigma_{n_1}}\right)\right)\right].$

According to [41], the function h(Q(·)) is a convex function. Jensen’s inequality thus implies

$H(Z_1|V)\ge h\!\left(Q\!\left(\sqrt{P/\sigma_{n_1}^2}\right)\right)=h\!\left(Q\!\left(\sqrt{\mathrm{CSNR}_1}\right)\right),$ (A7)

with equality iff $V^2=P$.

On the other hand, $h\!\left(Q\!\left(0\cdot\sqrt{P/\sigma_{n_1}^2}\right)\right)=1$; hence there exists $\gamma\in[0,1]$ such that

$H(Z_1|X_2)=h\!\left(Q\!\left(\sqrt{\gamma P/\sigma_{n_1}^2}\right)\right).$

Therefore,

$\begin{aligned}I(X_1^n;\hat{X}_1^n|X_2^n)&\le I(X_1^n;Z_1^n|X_2^n)\\&=H(Z_1^n|X_2^n)-H(Z_1^n|V^n)\\&=\sum_{k=1}^{n}\left[H\!\left(Z_{1,k}\,|\,X_2^n,Z_1^{k-1}\right)-H\!\left(Z_{1,k}\,|\,V^n,Z_1^{k-1}\right)\right]\\&\overset{(a)}{=}\sum_{k=1}^{n}\left[H\!\left(Z_{1,k}\,|\,X_{2,k}\right)-H\!\left(Z_{1,k}\,|\,V_k\right)\right]\\&\le n\left[h\!\left(Q\!\left(\sqrt{\gamma P/\sigma_{n_1}^2}\right)\right)-h\!\left(Q\!\left(\sqrt{P/\sigma_{n_1}^2}\right)\right)\right],\end{aligned}$ (A8)

where $\gamma\in[0,1]$ and (a) follows by the sample-by-sample (zero-delay) encoding assumption. As a result, the following inequality holds

$\frac{1}{2}\log\frac{\sigma_X^2(1-\rho^2)}{D_1^{ADC}}\le\frac{1}{2}\log\frac{\sigma_X^2(1-\rho^2)}{D_{1|2}^{ADC}}\le h\!\left(Q\!\left(\sqrt{\gamma P/\sigma_{n_1}^2}\right)\right)-h\!\left(Q\!\left(\sqrt{P/\sigma_{n_1}^2}\right)\right).$ (A9)

On the other hand, applying the data processing inequality at receiver 2, we obtain the following inequality

$\frac{n}{2}\log\frac{\sigma_X^2}{D_2^{ADC}}\le I(X_2^n;\hat{X}_2^n)\le I(X_2^n;Z_2^n).$ (A10)

Additionally, the mutual information can be expressed as

$I(X_2^n;Z_2^n)=H(Z_2^n)-H(Z_2^n|X_2^n)\le n-nH(Z_2|X_2).$ (A11)

Note that

$H(Z_2^n|X_2^n)=nh\!\left(Q\!\left(\sqrt{\gamma P/\sigma_{n_2}^2}\right)\right),$ (A12)

we can thus have

$\frac{1}{2}\log\frac{\sigma_X^2}{D_2^{ADC}}\le 1-h\!\left(Q\!\left(\sqrt{\gamma P/\sigma_{n_2}^2}\right)\right).$ (A13)

Based on (A9) and (A13), we have

$D_1^{ADC}\ge\sigma_X^2(1-\rho^2)\,2^{-2\left[h\left(Q\left(\sqrt{\gamma P/\sigma_{n_1}^2}\right)\right)-h\left(Q\left(\sqrt{P/\sigma_{n_1}^2}\right)\right)\right]},$ (A14)
$D_2^{ADC}\ge\sigma_X^2\,2^{-2\left[1-h\left(Q\left(\sqrt{\gamma P/\sigma_{n_2}^2}\right)\right)\right]}.$ (A15)

Author Contributions

Data curation, W.Z.; Formal analysis, W.Z.; Funding acquisition, X.C.; Methodology, X.C.; Software, W.Z.; Supervision, X.C.; Validation, W.Z.; Visualization, W.Z.; Writing—original draft, W.Z.; Writing—review & editing, X.C. All authors have read and agreed to the published version of the manuscript.

Funding

The authors would like to thank the support of the National Natural Science Foundation of China (61301181, U1530120) and the Scientific Research Foundation of Central South University.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to the ongoing study in this line of research.

Conflicts of Interest

The authors declare no conflict of interest.

Footnotes

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1.Shannon C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948;27:379–423. doi: 10.1002/j.1538-7305.1948.tb01338.x. [DOI] [Google Scholar]
  • 2.Varasteh M., Rassouli B., Simeone O., Gündüz D. Zero-Delay Source-Channel Coding with a Low-Resolution ADC Front End. IEEE Trans. Inf. Theory. 2018;64:1241–1261. doi: 10.1109/TIT.2017.2719708. [DOI] [Google Scholar]
  • 3.Davoli F., Marchese M., Mongelli M. Non-linear coding and decoding strategies exploiting spatial correlation in wireless sensor networks. IET Commun. 2012;6:2198–2207. doi: 10.1049/iet-com.2011.0799. [DOI] [Google Scholar]
  • 4.Giordani M., Polese M., Mezzavilla M., Rangan S., Zorzi M. Toward 6G Networks: Use Cases and Technologies. IEEE Commun. Mag. 2020;58:55–61. doi: 10.1109/MCOM.001.1900411. [DOI] [Google Scholar]
  • 5.Goblick T.J. Theoretical limitations on the transmission of data from analog sources. IEEE Trans. Inf. Theory. 1965;11:558–567. doi: 10.1109/TIT.1965.1053821. [DOI] [Google Scholar]
  • 6.He C., Hu Y., Chen Y., Fan X., Li H., Zeng B. MUcast: Linear Uncoded Multiuser Video Streaming with Channel Assignment and Power Allocation Optimization. IEEE Trans. Circuits Syst. Video Technol. 2020;30:1136–1146. doi: 10.1109/TCSVT.2019.2897649. [DOI] [Google Scholar]
  • 7.Wang K., Wade N. An integration platform for optimised design and real-time control of smart local energy systems; Proceedings of the 2021 12th International Renewable Engineering Conference (IREC); Amman, Jordan. 14–15 April 2021. [Google Scholar]
  • 8.Bross S., Lapidoth A., Tinguely S. Broadcasting correlated Gaussians. IEEE Trans. Inf. Theory. 2010;56:3057–3068. doi: 10.1109/TIT.2010.2048453. [DOI] [Google Scholar]
  • 9.Floor P.A., Kim A.N., Ramstad T.A., Balasingham I. Zero Delay Joint Source Channel Coding for Multivariate Gaussian Sources over Orthogonal Gaussian Channels. Entropy. 2013;15:2129–2161. doi: 10.3390/e15062129. [DOI] [Google Scholar]
  • 10.Casal P.S., Fresnedo O., Castedo L., Frías J.G. Analog Transmission of Correlated Sources over Fading SIMO Multiple Access Channels. IEEE Trans. Commun. 2017;65:2999–3011. doi: 10.1109/TCOMM.2017.2695197. [DOI] [Google Scholar]
  • 11.Hassanin M., Frías J.G. Analog Mappings for Non-Linear Channels with Applications to Underwater Channels. IEEE Trans. Commun. 2020;68:445–455. doi: 10.1109/TCOMM.2019.2948908. [DOI] [Google Scholar]
  • 12.Floor P.A., Kim A.N., Wernersson N., Ramstad T.A., Skoglund M., Balasingham I. Zero-Delay Joint Source-Channel Coding for a Bivariate Gaussian on a Gaussian MAC. IEEE Trans. Commun. 2012;60:3091–3102. doi: 10.1109/TCOMM.2012.071912.110456. [DOI] [Google Scholar]
  • 13.Chen X., Tuncel E. Zero-Delay Joint Source-Channel Coding Using Hybrid Digital-Analog Schemes in the Wyner-Ziv Setting. IEEE Trans. Commun. 2014;62:726–735. doi: 10.1109/TCOMM.2014.011914.130382. [DOI] [Google Scholar]
  • 14.Chen X. Zero-Delay Gaussian Joint Source–Channel Coding for the Interference Channel. IEEE Commun. Lett. 2018;22:712–715. doi: 10.1109/LCOMM.2018.2796073. [DOI] [Google Scholar]
  • 15.Murmann B. ADC Performance Survey 1997–2020. [(accessed on 29 July 2021)]. Available online: http://web.stanford.edu/~murmann/adcsurvey.html.
  • 16.Jeon Y.-S., Do H., Hong S.-N., Lee N. Soft-Output Detection Methods for Sparse Millimeter-Wave MIMO Systems with Low-Precision ADCs. IEEE Trans. Commun. 2019;67:2822–2836. doi: 10.1109/TCOMM.2019.2892048. [DOI] [Google Scholar]
  • 17.Dong P., Zhang H., Xu W., Li G.Y., You X. Performance Analysis of Multiuser Massive MIMO with Spatially Correlated Channels Using Low-Precision ADC. IEEE Commun. Lett. 2018;22:205–208. doi: 10.1109/LCOMM.2017.2761378. [DOI] [Google Scholar]
  • 18.Park S., Simeone O., Sahin O., Shitz S.S. Fronthaul Compression for Cloud Radio Access Networks: Signal processing advances inspired by network information theory. IEEE Signal Process. Mag. 2014;31:69–79. doi: 10.1109/MSP.2014.2330031. [DOI] [Google Scholar]
  • 19.O’Donnell I.D., Brodersen R.W. An ultra-wideband transceiver architecture for low power, low rate, wireless systems. IEEE Trans. Veh. Technol. 2005;54:1623–1631. doi: 10.1109/TVT.2005.854021. [DOI] [Google Scholar]
  • 20.Jeon Y., Lee N., Hong S., Heath R.W. One-Bit Sphere Decoding for Uplink Massive MIMO Systems with One-Bit ADCs. IEEE Trans. Wirel. Commun. 2018;17:4509–4521. doi: 10.1109/TWC.2018.2827028. [DOI] [Google Scholar]
  • 21.Nguyen L.V., Swindlehurst A.L., Nguyen D.H.N. SVM-Based Channel Estimation and Data Detection for One-Bit Massive MIMO Systems. IEEE Trans. Signal Process. 2021;69:2086–2099. doi: 10.1109/TSP.2021.3068629. [DOI] [Google Scholar]
  • 22.Dong Y., Wang H., Yao Y.-D. Channel Estimation for One-Bit Multiuser Massive MIMO Using Conditional GAN. IEEE Commun. Lett. 2021;25:854–858. doi: 10.1109/LCOMM.2020.3035326. [DOI] [Google Scholar]
  • 23.Myers N.J., Tran K.N., Heath R.W. Low-Rank MMWAVE MIMO Channel Estimation in One-Bit Receivers; Proceedings of the 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); Barcelona, Spain. 4–8 May 2020; pp. 5005–5009. [Google Scholar]
  • 24.Varasteh M., Rassouli B., Simeone O., Gündüz D. Zero-Delay Source-Channel Coding with a 1-Bit ADC Front End and Correlated Receiver Side Information. IEEE Trans. Commun. 2017;65:5429–5444. doi: 10.1109/TCOMM.2017.2747541. [DOI] [Google Scholar]
  • 25.Sevinç C., Tuncel E. On the Analysis of Energy-Distortion Tradeoff for Zero-Delay Transmission over Gaussian Broadcast Channels; Proceedings of the 2019 IEEE International Symposium on Information Theory (ISIT); Paris, France. 7–12 July 2019. [Google Scholar]
  • 26.Suárez-Casal P., Fresnedo Ó., Castedo L., García-Frías J. Analog Transmission of Correlated Sources over BC with Distortion Balancing. IEEE Trans. Commun. 2017;65:4871–4885. doi: 10.1109/TCOMM.2017.2726065. [DOI] [Google Scholar]
  • 27.Tian C., Diggavi S., Shamai S. The Achievable Distortion Region of Sending a Bivariate Gaussian Source on the Gaussian Broadcast Channel. IEEE Trans. Inf. Theory. 2011;57:6419–6427. doi: 10.1109/TIT.2011.2165801. [DOI] [Google Scholar]
  • 28.Hassanin M., Garcia-Frias J., Castedo L. Analog joint source channel coding for Gaussian Broadcast Channels; Proceedings of the 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); Florence, Italy. 4–9 May 2014; pp. 4264–4268. [Google Scholar]
  • 29.Hassanin M., Lu B., Garcia-Frias J. Application of analog joint source-channel coding to broadcast channels; Proceedings of the 2015 IEEE 16th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC); Stockholm, Sweden. 28 June–1 July 2015; pp. 610–614. [Google Scholar]
  • 30.Lin Z., Chen X., Lu X., Tan H. A Practical Low Delay Digital Scheme for Wyner-Ziv Coding over Gaussian Broadcast Channel; Proceedings of the 2021 55th Annual Conference on Information Sciences and Systems (CISS); Baltimore, MD, USA. 24–26 March 2021; pp. 1–6. [Google Scholar]
  • 31.Abou Saleh A., Alajaji F., Chan W.Y. Source-Interference Recovery Over Broadcast Channels: Asymptotic Bounds and Analog Codes. IEEE Trans. Commun. 2016;8:3406–3418. doi: 10.1109/TCOMM.2016.2582873. [DOI] [Google Scholar]
  • 32.Everett H., III Generalized Lagrange multiplier method for solving problems of optimum allocation of resources. Oper. Res. 1963;11:399–417. doi: 10.1287/opre.11.3.399. [DOI] [Google Scholar]
  • 33.Linde Y., Buzo A., Gray R.M. An algorithm for vector quantizer design. IEEE Trans. Commun. 1980;28:84–95. doi: 10.1109/TCOM.1980.1094577. [DOI] [Google Scholar]
  • 34.Kron J., Alajaji F., Skoglund M. Low-Delay Joint Source-Channel Mappings for the Gaussian MAC. IEEE Commun. Lett. 2014;18:249–252. doi: 10.1109/LCOMM.2013.120413.132088. [DOI] [Google Scholar]
  • 35.Saleh A.A., Alajaji F., Chan W. Power-constrained bandwidth-reduction source-channel mappings for fading channels; Proceedings of the 2012 26th Biennial Symposium on Communications (QBSC); Kingston, ON, Canada. 28–29 May 2012; pp. 85–90. [Google Scholar]
  • 36.Luenberger D.G. Optimization by Vector Space Methods. John Wiley & Sons, Inc.; Hoboken, NJ, USA: 1977. [Google Scholar]
  • 37.Varasteh M., Rassouoli B., Simeone O., Gündüz D. Joint Source channel coding with one-bit ADC front end; Proceedings of the IEEE International Symposium on Information Theory; Barcelona, Spain. 10–15 July 2016; pp. 3062–3066. [Google Scholar]
  • 38.Akyol E., Viswanatha K.B., Rose K., Ramstad T.A. On Zero-Delay Source-Channel Coding. IEEE Trans. Inf. Theory. 2014;60:7473–7489. doi: 10.1109/TIT.2014.2361532. [DOI] [Google Scholar]
  • 39.Hekland F., Floor P.A., Ramstad T.A. Shannon-Kotel’nikov mappings in joint source-channel coding. IEEE Trans. Commun. 2009;57:94–105. doi: 10.1109/TCOMM.2009.0901.070075. [DOI] [Google Scholar]
  • 40.Liang F., Luo C., Xiong R., Zeng W., Wu F. Hybrid Digital–Analog Video Delivery with Shannon–Kotel’nikov Mapping. IEEE Trans. Multimed. 2018;20:2138–2152. doi: 10.1109/TMM.2017.2785264. [DOI] [Google Scholar]
  • 41.Singh J., Dabeer O., Madhow U. On the limits of communication with low-precision analog-to-digital conversion at the receiver. IEEE Trans. Commun. 2009;57:3629–3639. doi: 10.1109/TCOMM.2009.12.080559. [DOI] [Google Scholar]
