Entropy. 2020 Jul 11;22(7):762. doi: 10.3390/e22070762

Achievable Information Rates for Probabilistic Amplitude Shaping: An Alternative Approach via Random Sign-Coding Arguments

Yunus Can Gültekin 1,*, Alex Alvarado 1, Frans M J Willems 1
PMCID: PMC7517313  PMID: 33286534

Abstract

Probabilistic amplitude shaping (PAS) is a coded modulation strategy in which constellation shaping and channel coding are combined. PAS has attracted considerable attention in both wireless and optical communications. Achievable information rates (AIRs) of PAS have been investigated in the literature using Gallager’s error exponent approach. In particular, it has been shown that PAS achieves the capacity of the additive white Gaussian noise channel (Böcherer, 2018). In this work, we revisit the capacity-achieving property of PAS and derive AIRs using weak typicality. Our objective is to provide alternative proofs based on random sign-coding arguments that are as constructive as possible. Accordingly, in our proofs, only some signs of the channel inputs are drawn from a random code, while the remaining signs and amplitudes are produced constructively. We consider both symbol-metric and bit-metric decoding.

Keywords: probabilistic amplitude shaping, achievable information rate, random coding, symbol-metric decoding, bit-metric decoding

1. Introduction

Coded modulation (CM) refers to the design of forward error correction (FEC) codes and high-order modulation formats, which are combined to reliably transmit more than one bit per channel use. Examples of CM strategies include multilevel coding (MLC) [1,2] in which each address bit of the signal point is protected by an individual binary FEC code, and trellis CM [3], which combines the functions of a trellis-based channel code and a modulator. Among many CM strategies, bit-interleaved CM (BICM) [4,5], which combines a high-order modulation format with a binary FEC code using a binary labeling strategy and uses bit-metric decoding (BMD) at the receiver, is the de-facto standard for CM. BICM is included in multiple wireless communication standards such as the IEEE 802.11 [6] and the DVB-S2 [7]. BICM is also currently the de-facto CM alternative for fiber optical communications.

Proposed in [8], probabilistic amplitude shaping (PAS) integrates constellation shaping into existing BICM systems. The shaping gap that exists for the additive white Gaussian noise (AWGN) channel [9] (Ch. 9) can be closed with PAS. To this end, an amplitude shaping block converts binary information strings into shaped amplitude sequences in an invertible manner. Then, a systematic FEC code produces parity bits by encoding the binary labels of these amplitudes. These parity bits are used to select the signs, and the combinations of the amplitudes and the signs, i.e., probabilistically shaped channel inputs, are transmitted over the channel. PAS has attracted considerable attention in fiber optical communications owing to the rate adaptivity it provides [10,11].

Achievable information rates (AIRs) of PAS have been investigated in the literature [12,13,14]. It has been shown that the capacity of the AWGN channel can be achieved with PAS, e.g., in [13] (Example 10.4). The achievability proofs in the literature are based on Gallager’s error exponent approach [15] (Ch. 5) or on strong typicality [16] (Ch. 1).

In this work, we provide a random sign-coding framework based on weak-typicality that contains the achievability proofs relevant for the PAS architecture. We also revisit the capacity-achieving property of PAS for the AWGN channel. As explained in Section 2.5, the first main contribution of this paper is to provide a framework that combines the constructive approach to amplitude shaping with randomly-chosen error-correcting codes, where the randomness is concentrated only in the choice of the signs. The second contribution is to provide a unifying framework of achievability proofs to bring together PAS results that are somewhat scattered in the literature, using a single proof technique, which we call the random sign-coding arguments.

This work is organized as follows. In Section 2, we briefly summarize the related literature on CM, AIRs, and PAS and state our contribution. In Section 3, we provide some background information on typical sequences and define a modified (weakly) typical set. In Section 4, we explain the random sign-coding setup. Finally in Section 5, we provide random sign-coding arguments to derive AIRs for PAS and, consequently, show that it achieves the capacity of a discrete-input memoryless channel with a symmetric capacity-achieving distribution. Conclusions are drawn in Section 6.

2. Related Work and Our Contribution

2.1. Notation

Capital letters X are used to denote random variables, while lower case letters x are used to denote their realizations. Underlined capital and lower case letters X_ and x_ are used to denote random vectors and their realizations, respectively. Boldface capital and lower case letters 𝐗 and 𝐱 are used to denote collections of random variables and their realizations, respectively. Underlined boldface capital and lower case letters 𝐗_ and 𝐱_ are used to denote collections of random vectors and their realizations, respectively. Element-wise multiplication of x_ and y_ is denoted by x_·y_. Calligraphic letters 𝒳 represent sets, while 𝒳𝒴 = {xy : x ∈ 𝒳, y ∈ 𝒴}. We denote by 𝒳^n the n-fold Cartesian product of 𝒳 with itself, while 𝒳 × 𝒴 is the Cartesian product of 𝒳 and 𝒴. Probability density and mass functions over 𝒳 are denoted by p(x). We use 𝟙[·] to indicate the indicator function, which is one when its argument is true and zero otherwise. The entropy of X is denoted by H(X) (in bits) and the expected value of X by E[X].

2.2. Achievable Information Rates

For a memoryless channel that is characterized by an input alphabet 𝒳, input distribution p(x), and channel law p(y|x), the maximum AIR is the mutual information (MI) I(X;Y) of the channel input X and output Y. Consequently, the capacity of this channel is defined as I(X;Y) maximized over all possible input distributions p(x), typically under an average power constraint, e.g., in [9] (Section 9.1). The MI can be achieved, e.g., with MLC and multi-stage decoding [1,2].
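The MI above is straightforward to evaluate numerically for any discrete channel. A minimal sketch (our own helper, not from the paper) that computes I(X;Y) in bits from p(x) and p(y|x):

```python
# Sketch: I(X;Y) for a discrete memoryless channel, in bits.
import numpy as np

def mutual_information(p_x, p_y_given_x):
    """p_x has shape [K]; p_y_given_x has shape [K, J] with rows p(y|x)."""
    p_xy = p_x[:, None] * p_y_given_x          # joint p(x, y)
    p_y = p_xy.sum(axis=0)                     # output marginal p(y)
    mask = p_xy > 0
    # I(X;Y) = sum_{x,y} p(x,y) log2( p(x,y) / (p(x) p(y)) )
    ratio = p_xy[mask] / (p_x[:, None] * p_y[None, :])[mask]
    return float(np.sum(p_xy[mask] * np.log2(ratio)))

# Binary symmetric channel with crossover 0.11 and uniform input:
p_x = np.array([0.5, 0.5])
W = np.array([[0.89, 0.11],
              [0.11, 0.89]])
print(mutual_information(p_x, W))  # close to 1 - h(0.11), about 0.5
```

For the BSC, the result matches the familiar closed form 1 − h(e) with h the binary entropy function.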

In BICM systems, channel inputs are uniquely labeled with log₂|𝒳| = (m+1)-bit binary strings. Here, we assume that |𝒳| is an integer power of two. At the transmitter, the output of a binary FEC code is mapped to channel inputs using this labeling strategy. At the receiver, BMD is employed, i.e., the binary labels C = (C_1, C_2, …, C_{m+1}) are assumed to be independent, and consequently, the symbol-wise decoding metric is written as the product of bit-metrics:

q(x, y) = \prod_{i=1}^{m+1} q_i(c_i, y).  (1)

Since the metric in (1) is in general not proportional to p(y|x), i.e., there is a mismatch between the actual channel law and the one assumed at the receiver, this setup is called mismatched decoding.

Different AIRs have been derived for this so-called mismatched decoding setup. One of these is the generalized MI (GMI) [17,18]:

\mathrm{GMI}(p(x)) = \max_{s \geq 0} \, \mathbb{E}\!\left[ \log_2 \frac{q(X,Y)^s}{\sum_{x \in \mathcal{X}} p(x)\, q(x,Y)^s} \right],  (2)

which, as shown in [19] (Thm. 4.11, Coroll. 4.12) and [20], reduces to:

\mathrm{GMI}(p(c_1) p(c_2) \cdots p(c_{m+1})) = \sum_{i=1}^{m+1} I(C_i; Y)  (3)

when the bit levels are independent at the transmitter, i.e., p(x) = p(c) = p(c_1) p(c_2) ⋯ p(c_{m+1}) where c = (c_1, c_2, …, c_{m+1}), and:

q_i(c_i, y) = p(y | c_i).  (4)

The rate (3) is achievable for both uniform and shaped bit levels [5,21]. The problem of computing the bit level distributions that maximize the GMI in (3) was shown to be nonconvex in [22]. The parameter that maximizes (2) to obtain (3) is s=1.
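For independent bit levels, the right-hand side of (3) can be evaluated level by level. The toy sketch below (the 4-ary channel and the bit distributions are our own illustrative choices, not from the paper) computes Σ_i I(C_i;Y) and checks that it never exceeds the full MI I(X;Y); for independent levels this follows from I(C_2;Y|C_1) ≥ I(C_2;Y).

```python
# Sketch: sum of bitwise MIs (3) vs. full MI for independent bit levels.
import numpy as np

def mi(p_a, p_y_given_a):
    """I(A;Y) in bits."""
    p_ay = p_a[:, None] * p_y_given_a
    p_y = p_ay.sum(axis=0)
    mask = p_ay > 0
    return float(np.sum(p_ay[mask] * np.log2(
        p_ay[mask] / (p_a[:, None] * p_y[None, :])[mask])))

# Two independent bit levels, x indexed as 2*c1 + c2.
p_c1, p_c2 = np.array([0.6, 0.4]), np.array([0.5, 0.5])
p_x = np.kron(p_c1, p_c2)                         # p(x) = p(c1) p(c2)
e = 0.1
W = (1 - e) * np.eye(4) + (e / 3) * (1 - np.eye(4))  # symbol-error channel

I_xy = mi(p_x, W)
# Channel "seen" by each bit level: average the rows over the other bit.
W_c1 = np.stack([p_c2 @ W[0:2], p_c2 @ W[2:4]])   # p(y|c1)
W_c2 = np.stack([p_c1 @ W[0::2], p_c1 @ W[1::2]]) # p(y|c2)
sum_bits = mi(p_c1, W_c1) + mi(p_c2, W_c2)
print(sum_bits, "<=", I_xy)
```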

Another AIR for mismatched decoding is the LM (lower bound on the mismatch capacity) rate [18,23]:

\mathrm{LM}(p(x)) = \max_{s \geq 0,\, r(\cdot)} \, \mathbb{E}\!\left[ \log_2 \frac{q(X,Y)^s\, r(X)}{\sum_{x \in \mathcal{X}} p(x)\, q(x,Y)^s\, r(x)} \right],  (5)

where r(·) is a real-valued cost function defined on 𝒳. The expectations in (2) and (5) are taken with respect to p(x,y).

When there is dependence among the bit levels, i.e., p(x) = p(c) ≠ p(c_1) p(c_2) ⋯ p(c_{m+1}), the rate [24,25]:

R_{\mathrm{BMD}}(p(x)) = H(\mathbf{C}) - \sum_{i=1}^{m+1} H(C_i | Y)  (6)

has been shown to be achievable by BMD for any joint input distribution p(c) = p(c_1, c_2, …, c_{m+1}). In [24,25], the achievability of (6) was derived using random coding arguments based on strong typicality [16] (Ch. 1). Later, in [26] (Lemma 1), it was shown that (6) is an instance of the so-called LM rate (5) for s = 1, the symbol decoding metric (1), the bit decoding metrics (4), and the cost function:

r(c_1, c_2, \ldots, c_{m+1}) = \frac{\prod_{i=1}^{m+1} p(c_i)}{p(c_1, c_2, \ldots, c_{m+1})}.  (7)

We note here that RBMD in (6) can be negative as discussed in [26] (Section II-B). In such cases, RBMD cannot be considered as an achievable rate. To avoid this, RBMD is defined as the maximum of (6) and zero in [26] (Equation (1)).
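The possible negativity of (6) is easy to reproduce numerically. In the toy sketch below (our own numbers, not from [26]), the two bit levels are fully dependent and the channel output is independent of the input, giving H(C) = 1 while each H(C_i|Y) = 1:

```python
# Sketch: the BMD rate (6) can be negative for dependent bit levels.
import numpy as np

def H(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Fully dependent bit levels: C1 = C2, uniform, so H(C) = 1 bit.
p_c = np.array([0.5, 0.0, 0.0, 0.5])    # p(c1, c2) over (00, 01, 10, 11)
# A useless channel: Y uniform regardless of the input.
W = np.full((4, 4), 0.25)

p_cy = p_c[:, None] * W                  # joint p(c, y)
p_y = p_cy.sum(axis=0)

def H_bit_given_y(level):
    # H(C_i|Y), with c = (c1, c2) indexed as 2*c1 + c2.
    idx = [(c >> (1 - level)) & 1 for c in range(4)]
    p_by = np.zeros((2, 4))
    for c in range(4):
        p_by[idx[c]] += p_cy[c]
    cond = p_by / p_y                    # columns are p(c_i | y)
    return float(sum(p_y[y] * H(cond[:, y]) for y in range(4)))

R_bmd = H(p_c) - sum(H_bit_given_y(l) for l in range(2))
print(R_bmd)   # -1.0 here: H(C) = 1 while each H(C_i|Y) = 1
```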

2.3. Probabilistic Amplitude Shaping: Model

PAS [8] is a capacity-achieving CM strategy in which constellation shaping and FEC coding are combined as shown in Figure 1. In PAS, an amplitude shaping block first maps k-bit information strings to length-n shaped amplitude sequences a_ = (a_1, a_2, …, a_n) in an invertible manner. These amplitudes are drawn from a 2^m-ary alphabet 𝒜. The amplitude shaping block can be realized using constant composition distribution matching [27], multiset-partition distribution matching [28], shell mapping [29], enumerative sphere shaping [30], etc.

Figure 1. Probabilistic amplitude shaping with transmission rate R = k/n + γ bit/1D.

After the n amplitudes are generated, the binary label sequences c_1, c_2, …, c_m of the amplitudes a_ and an additional γn-bit information string s_i = (s_1, s_2, …, s_{γn}) are fed to a rate-(m+γ)/(m+1) systematic FEC encoder. The encoder produces (1−γ)n parity bits s_p = (s_{γn+1}, s_{γn+2}, …, s_n). The additional data bits s_i and the parity bits s_p are used as the signs s_ = (s_1, s_2, …, s_n) for the amplitudes a_. Finally, the probabilistically shaped channel inputs x_ = s_·a_ are transmitted through the channel. Here, γ is the rate of the additional information in bits per symbol (bit/1D) or, equivalently, the fraction of signs that are selected directly by data bits. The transmission rate of PAS is R = k/n + γ in bit/1D.
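The PAS transmitter described above can be sketched structurally as follows. This is an illustrative skeleton, not the paper's construction: the shaper is a placeholder random draw standing in for an invertible shaping map, and the systematic code is a random parity matrix of our choosing.

```python
# Structural sketch of a PAS transmitter (illustrative stand-ins only).
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 2                        # n amplitudes from a 2^m-ary alphabet
A = np.array([1, 3, 5, 7])         # amplitude alphabet of 8-ASK

# Shaping layer (placeholder for a real shaper such as CCDM or ESS):
# an invertible map from k data bits to n shaped amplitudes is assumed.
a = rng.choice(A, size=n, p=[0.4, 0.3, 0.2, 0.1])

idx = (a - 1) // 2                 # amplitude index in {0, ..., 2^m - 1}
labels = np.array([(idx >> (m - 1 - j)) & 1 for j in range(m)])  # m x n bits

gamma_n = 2                        # gamma*n signs carry extra data bits
s_info = rng.integers(0, 2, size=gamma_n)

# Coding layer: (1 - gamma)*n parity bits from a systematic code
# (a uniformly random parity matrix P is our illustrative stand-in).
P = rng.integers(0, 2, size=(m * n + gamma_n, n - gamma_n))
u = np.concatenate([labels.reshape(-1), s_info])
s_parity = (u @ P) % 2
s = 1 - 2 * np.concatenate([s_info, s_parity])   # bits -> signs in {-1, +1}
x = s * a                          # probabilistically shaped channel inputs
print(x)
```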

2.4. Probabilistic Amplitude Shaping: Achievable Rates

Based on Gallager’s error exponent approach [15] (Ch. 5), AIRs of PAS were investigated in [12,13,14]. In [12], a random code ensemble was considered from which the channel inputs x_ were drawn. Then, the AIR in [12] (Equations (32)–(34)) was derived for a general memoryless decoding metric 𝕢(x,y). It was shown that by properly selecting 𝕢(x,y), I(X;Y) and the rate (6) can be recovered from the derived AIR, and consequently, they can be achieved with PAS.

Computing error exponents for PAS was also the main concern of the work presented in [13] (Ch. 10). The difference from [12] was in the random coding setup. In [13] (Ch. 10), a random code ensemble was considered from which only the signs s_ of the channel inputs were drawn at random. We call this the random sign-coding setup. The error exponent [13] (Equation (10.42)) was then derived again for a general memoryless decoding metric. Error exponents of PAS have also been examined based on the joint source-channel coding (JSCC) setup in [14,31]. Random sign-coding was considered in [14,31], but only with symbol-metric decoding (SMD) and only for the specific case where γ=0.

2.5. Our Contribution

In this work, we derive AIRs of PAS in a random sign-coding framework based on weak typicality [9] (Section 3.1, Section 7.6 and Section 15.2). We first consider basic sign-coding in which amplitudes of the channel inputs are generated constructively while the signs are drawn from a randomly generated code. Basic sign-coding corresponds to PAS with γ=0. Then, we consider modified sign-coding in which only some of the signs are drawn from the random code while the remaining are chosen directly by information bits. Modified sign-coding corresponds to PAS with 0<γ<1. We compute AIRs for both SMD and BMD.

Our first objective is to provide alternative proofs of achievability in which the codes are generated as constructively as possible. In our random sign-coding experiment, both the amplitude sequences (a_) and the sign sequence parts (s_i) that are information bits are constructively produced, and only the remaining signs (s_p) are randomly generated as illustrated in Figure 2. In most proofs of Shannon’s channel coding theorem, channel input sequences (x_) are drawn at random, and the existence of a good code is demonstrated. Therefore, these proofs are not constructive and cannot be used to identify good codes as discussed, e.g., in [32] (Section I) and the references therein. On the other hand, in our proofs using random sign-coding arguments, it is self-evident how—at least a part of—the code should be constructed. Our second objective is to provide a unified framework in which all possible PAS scenarios are considered, i.e., SMD or BMD at the receiver with 0γ<1, and corresponding AIRs are determined using a single technique, i.e., the random sign-coding argument.

Figure 2. The scope of the random coding experiments considered in this work and in [12,13,14].

Note that our approach differs from the random sign-coding setup considered in [13,14] where all signs (s_i and s_p) were generated randomly, which was called partially systematic encoding in [13] (Ch. 10). We will show later that only s_p needs to be chosen randomly. Furthermore, we define a special type of typicality (B-typicality; see Definition 1 below) that allows us to avoid the mismatched JSCC approach of [14].

3. Preliminaries

3.1. Memoryless Channels

We consider communication over a memoryless channel with discrete input X ∈ 𝒳 and discrete output Y ∈ 𝒴. The channel law is given by:

p(\underline{y} | \underline{x}) = \prod_{i=1}^{n} p(y_i | x_i).  (8)

Later in Example 1, we will also discuss the AWGN channel Y = X + Z, where Z is zero-mean Gaussian with variance σ². In this case, we assume that the channel output Y is a quantized version of the continuous channel output X + Z. Furthermore, we assume that this quantization has a resolution high enough that the discrete-output channel is an accurate model for the underlying continuous-output channel. Therefore, the achievability results we will obtain for discrete memoryless channels carry over to the discrete-input AWGN channel.

3.2. Typical Sequences

We will provide achievability proofs based on weak typicality. In this section, which is based on [9] (Section 3.1, Section 7.6, and Section 15.2), we formally define weak typicality and list its properties that will be used in this paper.

Let ε>0 and n be a positive integer. Consider the random variable X with probability distribution p(x). Then, the (weak) typical set Aεn(X) of length-n sequences with respect to p(x) is defined as:

A_\epsilon^n(X) \triangleq \left\{ \underline{x} \in \mathcal{X}^n : \left| -\tfrac{1}{n} \log_2 p(\underline{x}) - H(X) \right| \leq \epsilon \right\},  (9)

where:

p(\underline{x}) \triangleq \prod_{i=1}^{n} p(x_i).  (10)

The cardinality of the typical set Aεn(X) satisfies [9] (Thm. 3.1.2):

(1-\epsilon)\, 2^{n(H(X)-\epsilon)} \overset{(a)}{\leq} |A_\epsilon^n(X)| \overset{(b)}{\leq} 2^{n(H(X)+\epsilon)},  (11)

where (a) holds for n sufficiently large and (b) holds for all n. For x_ ∈ A^n_ε(X), the probability of occurrence can be bounded as [9] (Equation (3.6)):

2^{-n(H(X)+\epsilon)} \leq p(\underline{x}) \leq 2^{-n(H(X)-\epsilon)}.  (12)
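Definition (9) and bound (12) can be checked directly on a toy source (the distribution and the test sequence below are our own illustrative choices):

```python
# Sketch: membership in the weakly typical set (9) and the bound (12).
import numpy as np

p = {'a': 0.5, 'b': 0.25, 'c': 0.25}
H = -sum(q * np.log2(q) for q in p.values())     # H(X) = 1.5 bits

def is_typical(seq, eps):
    n = len(seq)
    log_p = sum(np.log2(p[s]) for s in seq)
    return abs(-log_p / n - H) <= eps

seq = ['a', 'a', 'b', 'c', 'a', 'b', 'a', 'c']   # empirical freqs match p
n, eps = len(seq), 0.1
assert is_typical(seq, eps)
# Bound (12): 2^{-n(H+eps)} <= p(seq) <= 2^{-n(H-eps)}
p_seq = np.prod([p[s] for s in seq])             # here exactly 2^{-12}
assert 2 ** (-n * (H + eps)) <= p_seq <= 2 ** (-n * (H - eps))
print(p_seq)
```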

The idea of typical sets can be generalized for pairs of n-sequences. Now, consider the pair of random variables (X,Y) with probability distribution p(x,y). Then, the typical set Aεn(XY) of pairs of length-n sequences with respect to p(x,y) is defined as:

A_\epsilon^n(XY) \triangleq \left\{ (\underline{x}, \underline{y}) \in \mathcal{X}^n \times \mathcal{Y}^n : \left| -\tfrac{1}{n} \log_2 p(\underline{x}) - H(X) \right| \leq \epsilon, \; \left| -\tfrac{1}{n} \log_2 p(\underline{y}) - H(Y) \right| \leq \epsilon, \; \left| -\tfrac{1}{n} \log_2 p(\underline{x}, \underline{y}) - H(X,Y) \right| \leq \epsilon \right\}  (13)

where:

p(\underline{x}, \underline{y}) \triangleq \prod_{i=1}^{n} p(x_i, y_i),  (14)

and where p(x) and p(y) are the marginal distributions that correspond to p(x,y). The cardinality of the typical set Aεn(XY) satisfies [9] (Thm. 7.6.1):

|A_\epsilon^n(XY)| \leq 2^{n(H(X,Y)+\epsilon)}  (15)

for all n. For (x_, y_) ∈ A^n_ε(XY), the probability of occurrence can be bounded in a similar manner to (12) as:

2^{-n(H(X,Y)+\epsilon)} \leq p(\underline{x}, \underline{y}) \leq 2^{-n(H(X,Y)-\epsilon)}.  (16)

Along the same lines, joint typicality can be extended to collections of n-sequences (X_1, X_2, …, X_m), and the corresponding typical set A^n_ε(X_1 X_2 ⋯ X_m) can be defined similar to how (9) was extended to (13). Then, for (x_1, x_2, …, x_m) ∈ A^n_ε(X_1 X_2 ⋯ X_m), the probability of occurrence can be bounded in a similar manner to (16) as:

2^{-n(H(\mathbf{X})+\epsilon)} \leq p(\underline{x}_1, \underline{x}_2, \ldots, \underline{x}_m) \leq 2^{-n(H(\mathbf{X})-\epsilon)},  (17)

where 𝐗 = (X_1, X_2, …, X_m).

Finally, we fix x_. The conditional (weak) typical set Aεn(Y|x_) of length-n sequences is defined as:

A_\epsilon^n(Y | \underline{x}) = \left\{ \underline{y} : (\underline{x}, \underline{y}) \in A_\epsilon^n(XY) \right\}.  (18)

In other words, A^n_ε(Y|x_) is the set of all y_ sequences that are jointly typical with x_. For x_ ∈ A^n_ε(X) and for sufficiently large n, the cardinality of the conditional typical set A^n_ε(Y|x_) satisfies [9] (Thm. 15.2.2):

|A_\epsilon^n(Y | \underline{x})| \leq 2^{n(H(Y|X)+2\epsilon)}.  (19)

Definition 1

(B-typicality). Let the input probability distribution p(u) together with the transition probability distribution p(v|u) determine the joint probability distribution p(u,v)=p(u)p(v|u). Now, we define:

B_{V,\epsilon}^n(U) \triangleq \left\{ \underline{u} : \underline{u} \in A_\epsilon^n(U) \text{ and } \Pr\!\left\{ (\underline{u}, \underline{V}) \in A_\epsilon^n(UV) \,\middle|\, \underline{U} = \underline{u} \right\} \geq 1 - \epsilon \right\},  (20)

where V_ is the output sequence of a “channel” p(v|u) when sequence u_ is input.

The set BV,εn(U) in (20) guarantees that a sequence u_ in this B-typical set will with high probability lead to a sequence v_ that is jointly typical with u_. We note that U and/or V can be composite. The set BV,εn(U) has three properties, as stated in Lemma 1, the proof of which is given in Appendix A.

Lemma 1

(B-typicality properties). The set BV,εn(U) in Definition 1 has the following properties:

  • P1:
    For u_ ∈ B^n_{V,ε}(U),
    2^{-n(H(U)+\epsilon)} \leq p(\underline{u}) \leq 2^{-n(H(U)-\epsilon)}.  (21)
  • P2:
    For n large enough,
    \sum_{\underline{u} \notin B_{V,\epsilon}^n(U)} p(\underline{u}) \leq \epsilon.
  • P3:

    |B^n_{V,ε}(U)| ≤ 2^{n(H(U)+ε)} holds for all n, while |B^n_{V,ε}(U)| ≥ (1−ε) 2^{n(H(U)−ε)} holds for n large enough.
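Definition 1 can be checked by exhaustive enumeration on a tiny source-channel pair. All parameters below are our own illustrative choices (at such a short blocklength a generous ε is needed for the sets to be nonempty); the check confirms that the B-typical set is a subset of the weakly typical set, as used in the proof of P1.

```python
# Brute-force sketch of the B-typical set of Definition 1 (toy parameters).
import itertools, math

p_u = {0: 0.7, 1: 0.3}                          # source p(u)
p_v_u = {0: {0: 0.9, 1: 0.1},                   # "channel" p(v|u)
         1: {0: 0.2, 1: 0.8}}
p_uv = {(u, v): p_u[u] * p_v_u[u][v] for u in (0, 1) for v in (0, 1)}
p_v = {v: p_uv[0, v] + p_uv[1, v] for v in (0, 1)}

def H(dist):
    return -sum(q * math.log2(q) for q in dist.values() if q > 0)

H_U, H_V, H_UV = H(p_u), H(p_v), H(p_uv)
n, eps = 6, 0.6                                  # tiny n, generous eps

def typ(seq, dist, target):
    lp = sum(math.log2(dist[s]) for s in seq)
    return abs(-lp / n - target) <= eps

def jointly_typ(u_seq, v_seq):
    return (typ(u_seq, p_u, H_U) and typ(v_seq, p_v, H_V)
            and typ(list(zip(u_seq, v_seq)), p_uv, H_UV))

A_U = [u for u in itertools.product((0, 1), repeat=n) if typ(u, p_u, H_U)]
B_U = []
for u_seq in A_U:
    # Pr{ (u, V) jointly typical | U = u }, by exhaustive enumeration.
    pr = sum(math.prod(p_v_u[ui][vi] for ui, vi in zip(u_seq, v_seq))
             for v_seq in itertools.product((0, 1), repeat=n)
             if jointly_typ(u_seq, v_seq))
    if pr >= 1 - eps:
        B_U.append(u_seq)
print(len(B_U), len(A_U))    # B-typical set is contained in A_eps^n(U)
```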

4. Random Sign-Coding Experiment

We consider 2^{m+1}-ary amplitude shift keying (M-ASK) alphabets 𝒳 = {−M+1, −M+3, …, M−1} where M = 2^{m+1}. We note that 𝒳 is symmetric around the origin and can be factorized as 𝒳 = 𝒮𝒜. Here, 𝒮 = {−1, +1} and 𝒜 = {+1, +3, …, M−1} are the sign and amplitude alphabets, respectively. Accordingly, any channel input x ∈ 𝒳 can be written as the multiplication of a sign and an amplitude, i.e., x = s·a.
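The factorization 𝒳 = 𝒮𝒜 can be verified directly; a minimal sketch for m = 2 (8-ASK):

```python
# Sketch: the M-ASK alphabet factorizes into signs times amplitudes.
m = 2
M = 2 ** (m + 1)
X = list(range(-M + 1, M, 2))            # {-7, -5, ..., +7}
S = [-1, +1]
A = list(range(1, M, 2))                 # {+1, +3, ..., +7}
assert sorted(s * a for s in S for a in A) == X
# every channel input splits uniquely into a sign and an amplitude
for x in X:
    s, a = (1 if x > 0 else -1), abs(x)
    assert s * a == x and s in S and a in A
print(X)
```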

4.1. Random Sign-Coding Setup

We cast the PAS structure shown in Figure 1 as a sign-coding structure as in Figure 3. The sign-coding setup consists of two layers: a shaping layer and a coding layer.

Figure 3.

Figure 3

Sign-coding structure: sign-coding (coder) is combined with amplitude shaping (shaper). SMD, symbol-metric decoding; BMD, bit-metric decoding.

Definition 2

(Sign-coding). For every message index pair (m_a, m_s), with uniform m_a ∈ {1, 2, …, M_a} and uniform m_s ∈ {1, 2, …, M_s}, a sign-coding structure as shown in Figure 3 consists of the following.

  • A shaping layer that produces for every message index ma, a length-n shaped amplitude sequence a_(ma) where the mapping is one-to-one. The set of amplitude sequences is assumed to be shaped, but uncoded.

  • An additional n_1-bit (uniform) information string in the form of a sign sequence part s_′(m_s) = (s_1(m_s), s_2(m_s), …, s_{n_1}(m_s)) for every message index m_s.

  • A coding layer that extends the sign sequence part s_′(m_s) by adding a second (uniform) sign sequence part s_″(m_a, m_s) = (s_{n_1+1}(m_a, m_s), s_{n_1+2}(m_a, m_s), …, s_n(m_a, m_s)) of length n_2 for all m_a and m_s. This is obtained by using an encoder that produces redundant signs in the set 𝒮 from a_(m_a) and s_′(m_s). Here, n_1 + n_2 = n.

Finally, the transmitted sequence is x_(m_a, m_s) = a_(m_a)·s_(m_a, m_s), where s_(m_a, m_s) = (s_′(m_s), s_″(m_a, m_s)). The sign-coding setup with n_1 = 0 (γ = 0) is called basic sign-coding, while the setup with n_1 > 0 (γ > 0) is called modified sign-coding.

4.2. Shaping Layer

When SMD is employed at the receiver, the shaping layer is as shown in Figure 4. Here, let A be distributed with p(a) over a ∈ 𝒜. Then, the shaper produces for every message index m_a a length-n amplitude sequence a_(m_a) ∈ B^n_{SY,ε}(A). We note that for this sign-coding setup, the rate is:

R = \tfrac{1}{n} \log_2 (M_a M_s) = \gamma + \tfrac{1}{n} \log_2 |B_{SY,\epsilon}^n(A)| \geq H(A) + \gamma - 2\epsilon,  (22)

where the inequality in (22) follows for n large enough from P3.

Figure 4. Shaping layer of the random sign-coding setup with SMD.

On the other hand, when BMD is used at the receiver, the shaping layer is as shown in Figure 5. Here, let 𝐁 = (B_1, B_2, …, B_m) be distributed with p(𝐛) = p(b_1, b_2, …, b_m) over (b_1, b_2, …, b_m) ∈ {0,1}^m. The shaper produces for every message index m_a an n-sequence of m-tuples 𝐛_(m_a) = (b_1(m_a), b_2(m_a), …, b_m(m_a)) ∈ B^n_{SY,ε}(B_1 B_2 ⋯ B_m). Then, each m-tuple is mapped to an amplitude by a symbol-wise mapping function f(·), producing the amplitude sequence a_(m_a). We note that for this sign-coding setup, the rate is:

R = \tfrac{1}{n} \log_2 (M_a M_s) = \gamma + \tfrac{1}{n} \log_2 |B_{SY,\epsilon}^n(\mathbf{B})| \geq H(\mathbf{B}) + \gamma - 2\epsilon,  (23)

where the inequality in (23) follows for n large enough from P3.

Figure 5. Shaping layer of the random sign-coding setup with BMD for M-ASK.

To realize f(·), we label the channel inputs with (m+1)-bit strings. The amplitude is addressed by the m amplitude bits (B_1, B_2, …, B_m), while the sign is addressed by a sign bit S. The symbol-wise mapping function f(·) in Figure 5 uses the addressing (B_1, B_2, …, B_m) → A. We emphasize that unlike the case in Section 2.2, we use (S, B_1, B_2, …, B_m) to denote a channel input instead of (C_1, C_2, …, C_{m+1}). The amplitudes and signs of x ∈ 𝒳 are tabulated for 8-ASK in Table 1, along with an example of the mapping function f(b_1, b_2), namely the binary reflected Gray code [19] (Defn. 2.10).

Table 1.

Input alphabet and mapping function for 8-ASK.

A    7   5   3   1   1   3   5   7
S   −1  −1  −1  −1  +1  +1  +1  +1
X   −7  −5  −3  −1  +1  +3  +5  +7
B1   0   0   1   1   1   1   0   0
B2   0   1   1   0   0   1   1   0
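A sketch implementing the labeling of Table 1 (both the bit-to-amplitude assignment and the sign convention, sign bit 1 ↦ +1, are read off the table above):

```python
# Sketch: the 8-ASK mapping f(b1, b2) and sign bit of Table 1.
table = {   # (b1, b2) -> amplitude, read off Table 1
    (1, 0): 1, (1, 1): 3, (0, 1): 5, (0, 0): 7,
}

def f(b1, b2):
    return table[(b1, b2)]

def channel_input(s_bit, b1, b2):
    s = +1 if s_bit == 1 else -1     # sign bit 1 -> +1, per Table 1
    return s * f(b1, b2)

# Reconstruct the X row of Table 1 from its S, B1, B2 rows.
row_s  = [0, 0, 0, 0, 1, 1, 1, 1]
row_b1 = [0, 0, 1, 1, 1, 1, 0, 0]
row_b2 = [0, 1, 1, 0, 0, 1, 1, 0]
X = [channel_input(s, b1, b2) for s, b1, b2 in zip(row_s, row_b1, row_b2)]
print(X)   # [-7, -5, -3, -1, 1, 3, 5, 7]
```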

4.3. Decoding Rules

At the receiver, SMD finds the unique message index pair (m̂_a, m̂_s) such that the corresponding amplitude-sign sequence is jointly typical with the received output sequence y_, i.e., (a_(m̂_a), s_(m̂_a, m̂_s), y_) ∈ A^n_ε(ASY).

On the other hand, BMD finds the unique message index pair (m̂_a, m̂_s) such that the corresponding bit and sign sequences are (individually) jointly typical with the received output sequence y_, i.e., (s_(m̂_a, m̂_s), y_) ∈ A^n_ε(SY) and (b_j(m̂_a), y_) ∈ A^n_ε(B_j Y) for j = 1, 2, …, m. We note that the decoder can use the bit metrics p(b_{j,i} = 1 | y_i) = 1 − p(b_{j,i} = 0 | y_i) for j = 1, 2, …, m and i = 1, 2, …, n to find p(b_j | y_). Here, b_{j,i} is the jth bit of the ith symbol. Together with p(y_) and p(b_j), the decoder can check whether (b_j, y_) ∈ A^n_ε(B_j Y). We note that B_j is in general not uniform. A similar procedure applies to the sign S, which is uniform.

5. Achievable Information Rates of Sign-Coding

Here, we investigate AIRs of the sign-coding architecture in Figure 3. We consider both SMD and BMD at the receiver. In what follows, four AIRs are presented. The proofs are based on B-typicality, a variation of weak typicality, and random sign-coding arguments and are given in Appendix B. As indicated in Definition 2, signs S are assumed to be uniform in the proofs. We have not applied weak typicality for continuous random variables, discussed in [9] (Section 8.2) and [33] (Section 10.4), since our channels are discrete-input. However, it is also possible to develop a hybrid version of weak typicality that matches with discrete-input continuous-output channels.

In the following, the concept of AIR is formally defined in the sign-coding context.

Definition 3

(Achievable information rate). A rate R is said to be achievable if for every δ > 0 and n large enough, there exists a sign-coding encoder and a decoder such that (1/n) log_2(M_a M_s) ≥ R − δ and the error probability satisfies P_e ≤ δ.

5.1. Sign-Coding with Symbol-Metric Decoding

Theorem 1

(Basic sign-coding with SMD). For a memoryless channel {X,p(y|x),Y} with amplitude shaping and basic sign-coding, the rate:

R_{\mathrm{SMD}}^{\gamma=0} = \max_{p(a):\, H(A) \leq I(SA;Y)} H(A)  (24)

is achievable using SMD.

Theorem 1 implies that for a memoryless channel, the rate R = H(A) is achievable with basic sign-coding, as long as H(A) ≤ I(SA;Y) = I(X;Y) is satisfied. For the AWGN channel, this means that a range of rate-SNR pairs is achievable. Here, SNR denotes the signal-to-noise ratio. One of these points, H(A) = I(SA;Y), is on the capacity-SNR curve. Note that here, “capacity” indicates the largest achievable rate using 𝒳 as the channel input alphabet under the average power constraint. It can be observed from Figure 6, discussed in Example 1, that there indeed exists an amplitude distribution p(a) for which H(A) = I(SA;Y).

Figure 6. Sign-coding with SMD for 4-ASK. All C_{4-ASK} ≥ 0.562 bit/1D can be achieved with sign-coding. AIR, achievable information rate.

Theorem 2

(Modified sign-coding with SMD). For a memoryless channel {X,p(y|x),Y} with amplitude shaping and modified sign-coding, the rate:

R_{\mathrm{SMD}}^{\gamma>0} = \max_{p(a),\, \gamma:\, H(A)+\gamma \leq I(SA;Y)} H(A) + \gamma  (25)

is achievable using SMD for γ<1.

Theorem 2 implies that for a memoryless channel, the rate H(A) + γ is achievable with modified sign-coding, as long as R = H(A) + γ ≤ I(SA;Y) = I(X;Y) is satisfied. For the AWGN channel, this means that all points on the capacity-SNR curve for which H(X|Y) ≤ 1 − γ are achievable. This follows from:

H(A) + \gamma \leq I(SA;Y) = H(SA) - H(SA|Y) = H(A) + 1 - H(X|Y),  (26)

i.e., the constraint in the maximization in (25).
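The step H(SA) = H(A) + 1 in (26) relies on the sign being uniform and independent of the amplitude. A quick numeric check, with an amplitude distribution of our own choosing:

```python
# Sketch: H(X) = H(SA) = H(A) + 1 for a uniform, independent sign.
import math

p_a = {1: 0.4, 3: 0.3, 5: 0.2, 7: 0.1}
p_x = {}
for a, pa in p_a.items():
    for s in (-1, +1):
        p_x[s * a] = 0.5 * pa          # S uniform, independent of A

H = lambda d: -sum(p * math.log2(p) for p in d.values() if p > 0)
print(H(p_x), H(p_a) + 1)              # equal
```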

Example 1.

We consider the AWGN channel with average power constraint E[X²] ≤ P. Figure 6 shows the capacity of 4-ASK:

C_{\text{4-ASK}} = \max_{p(x):\, \mathcal{X}=\{-3,-1,+1,+3\},\, \mathbb{E}[X^2] \leq P} I(X;Y)  (27)

together with the amplitude entropy H(A) of the distribution that achieves this capacity. Here, SNR = E[X²]/σ², and σ² is the noise variance. Basic sign-coding achieves capacity only for SNR = 0.72 dB, i.e., at the point where H(A) = I(X;Y), which is C_{4-ASK} = 0.562 bit/1D. We see from Figure 6 that the shaping gap is negligible around this point, i.e., the capacity C_{4-ASK} of 4-ASK and the MI I(X;Y) for uniform p(x) are virtually the same. On the other hand, this gap is significant at larger rates, e.g., it is around 0.42 dB at 1.6 bit/1D. To achieve rates larger than 0.562 bit/1D on the capacity-SNR curve, modified sign-coding (γ > 0) is required. At a given SNR, C_{4-ASK} can be written as C_{4-ASK} = H(A) + γ, i.e., when the H(A) curve is shifted up by γ, the crossing point is again at C_{4-ASK} for that SNR. We also plot the additional rate γ = C_{4-ASK} − H(A) in Figure 6. As an example, at SNR = 9.74 dB, C_{4-ASK} = H(A) + γ = 1.6 can be achieved with modified sign-coding where H(A) = 0.9 and γ = 0.7. We observe that sign-coding achieves the capacity of 4-ASK for SNR ≥ 0.72 dB.
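The MI curve of this example can be reproduced approximately with a finely quantized AWGN output, as assumed in Section 3.1. The sketch below uses our own discretization and a uniform input rather than the capacity-achieving distribution, so it only approximates C_{4-ASK}; it nonetheless shows I(X;Y) growing with SNR toward 2 bit/1D.

```python
# Sketch: I(X;Y) of uniform 4-ASK over a finely quantized AWGN channel.
import numpy as np

X = np.array([-3.0, -1.0, 1.0, 3.0])
p_x = np.full(4, 0.25)                       # uniform input

def mi_awgn(snr_db, n_bins=2000):
    """I(X;Y) in bits, output quantized to n_bins uniform bins."""
    snr = 10 ** (snr_db / 10)
    sigma2 = np.mean(X ** 2) / snr           # noise variance for this SNR
    y = np.linspace(-12.0, 12.0, n_bins)     # output quantization grid
    W = np.exp(-(y[None, :] - X[:, None]) ** 2 / (2 * sigma2))
    W /= W.sum(axis=1, keepdims=True)        # discrete channel law p(y|x)
    p_y = p_x @ W
    ratio = np.maximum(W / p_y[None, :], 1e-300)
    return float(np.sum(p_x[:, None] * W * np.log2(ratio)))

for snr_db in (-2.0, 0.72, 4.0, 8.0):
    print(snr_db, mi_awgn(snr_db))           # increases with SNR, toward 2
```

At 0.72 dB the value lands near the 0.562 bit/1D quoted above, consistent with the shaping gap being negligible at that operating point.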

5.2. Sign-Coding with Bit-Metric Decoding

The following theorems give AIRs for sign-coding with BMD.

Theorem 3

(Basic sign-coding with BMD). For a memoryless channel {X,p(y|x),Y} with amplitude shaping using M-ASK and basic sign-coding, the rate:

R_{\mathrm{BMD}}^{\gamma=0} = \max_{p(\mathbf{b}):\, H(\mathbf{B}) \leq R_{\mathrm{BMD}}(p(x))} H(\mathbf{B})  (28)

is achievable using BMD. Here, 𝐁 = (B_1, B_2, …, B_m), p(𝐛) = p(b_1, b_2, …, b_m), p(x) = p(s, b_1, b_2, …, b_m), and R_BMD(p(x)) is as defined in (6).

Theorem 4

(Modified sign-coding with BMD). For a memoryless channel {X,p(y|x),Y} with amplitude shaping using M-ASK and modified sign-coding, the rate:

R_{\mathrm{BMD}}^{\gamma>0} = \max_{p(\mathbf{b}),\, \gamma:\, H(\mathbf{B})+\gamma \leq R_{\mathrm{BMD}}(p(x))} H(\mathbf{B}) + \gamma  (29)

is achievable using BMD for γ<1.

Theorems 3 and 4 imply that for a memoryless channel, the rate R = H(𝐁) + γ = H(A) + γ is achievable with sign-coding and BMD, as long as R ≤ R_BMD is satisfied.

Remark 1

(Random sign-coding with binary linear codes). An amplitude can be represented by m bits. We can uniformly generate a code matrix with mn rows of length n. This matrix can be used to produce the sign sequences. This results in the pairwise independence of any two different sign sequences, as is explained in the proof of [15] (Theorem 6.2.1). Inspection of the proof of our Theorem 1 shows that only the pairwise independence of sign sequences is needed. Therefore, achievability can also be obtained with a binary linear code. Note that our linear code can also be seen as a systematic code that generates parity. The code rate of the corresponding systematic code is m/(m+1). For BMD, a similar reasoning shows that linear codes lead to achievability, and also for modified sign-coding, achievability follows for binary linear codes. The rate of the systematic code that corresponds to the modified setting is (m+γ)/(m+1).
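The pairwise independence invoked above can be verified exhaustively for tiny dimensions (the parameters below are our own): over all binary code matrices G, the codewords of two linearly independent messages take every pair of values equally often, i.e., they are pairwise independent and uniform.

```python
# Sketch: pairwise independence of codewords under a random linear code.
import itertools
from collections import Counter

k, n = 2, 2
u1, u2 = (1, 0), (0, 1)                  # two linearly independent messages

def encode(u, G):
    """Codeword c = u G over GF(2); G is a list of k rows of length n."""
    return tuple(sum(ui * gi for ui, gi in zip(u, col)) % 2
                 for col in zip(*G))

counts = Counter()
for bits in itertools.product((0, 1), repeat=k * n):
    G = [bits[i * n:(i + 1) * n] for i in range(k)]   # all 2^(k*n) matrices
    counts[(encode(u1, G), encode(u2, G))] += 1

# Uniform over all 4 x 4 codeword pairs -> pairwise independence.
print(len(counts), set(counts.values()))
```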

6. Conclusions

In this paper, we studied achievable information rates (AIRs) of probabilistic amplitude shaping (PAS) for discrete-input memoryless channels. In contrast to the existing literature in which Gallager’s error exponent approach was followed, we used a weak typicality framework. Random sign-coding arguments based on weak typicality were introduced to upper-bound the probability of error of a so-called sign-coding structure. The achievability of the mutual information was demonstrated for uniform signs, which were independent of the amplitudes. Sign-coding combined with amplitude shaping corresponded to PAS, and consequently, PAS achieved the capacity of a discrete-input memoryless channel with a symmetric capacity-achieving distribution.

Our approach was different from the random coding arguments considered in the literature, in the sense that our motivation was to provide achievability proofs that were as constructive as possible. To this end, in our random sign-coding setup, both the amplitudes and the signs of the channel inputs that were directly selected by information bits were constructively produced. Only the remaining signs were drawn at random. A study of the achievability of capacity for channels with asymmetric capacity-achieving distributions via a type of sign-coding is left for possible future research.

Appendix A. Proof of Lemma 1

Appendix A.1. Proof of P1

We see from [9] (Equation (3.6)) that for u_ ∈ A^n_ε(U),

2^{-n(H(U)+\epsilon)} \leq p(\underline{u}) \leq 2^{-n(H(U)-\epsilon)}.  (A1)

Due to Definition 1, each u_ ∈ B^n_{V,ε}(U) is also in A^n_ε(U); more specifically, B^n_{V,ε}(U) ⊆ A^n_ε(U). Consequently, (A1) also holds for u_ ∈ B^n_{V,ε}(U), which completes the proof of P1.

Appendix A.2. Proof of P2

Let (U_,V_) be independent and identically distributed with respect to p(u,v). Then:

\Pr\{(\underline{U}, \underline{V}) \in A_\epsilon^n(UV)\} = \sum_{\underline{u}} p(\underline{u}) \sum_{\underline{v}:\, (\underline{u},\underline{v}) \in A_\epsilon^n(UV)} p(\underline{v}|\underline{u}) = \sum_{\underline{u} \in B_{V,\epsilon}^n(U)} p(\underline{u}) \sum_{\underline{v}:\, (\underline{u},\underline{v}) \in A_\epsilon^n(UV)} p(\underline{v}|\underline{u})  (A2)
+ \sum_{\underline{u} \notin B_{V,\epsilon}^n(U)} p(\underline{u}) \sum_{\underline{v}:\, (\underline{u},\underline{v}) \in A_\epsilon^n(UV)} p(\underline{v}|\underline{u})  (A3)
\leq \sum_{\underline{u} \in B_{V,\epsilon}^n(U)} p(\underline{u}) + \sum_{\underline{u} \notin B_{V,\epsilon}^n(U)} p(\underline{u})\,(1-\epsilon)  (A4)
= 1 - \epsilon + \epsilon \sum_{\underline{u} \in B_{V,\epsilon}^n(U)} p(\underline{u})  (A5)
= 1 - \epsilon + \epsilon \Pr\{\underline{U} \in B_{V,\epsilon}^n(U)\}.  (A6)

Here, (A4) follows from Definition 1, which implies that Pr{(u_, V_) ∈ A^n_ε(UV) | U_ = u_} < 1 − ε for u_ ∈ A^n_ε(U) with u_ ∉ B^n_{V,ε}(U) (and the inner sum vanishes for u_ ∉ A^n_ε(U), since joint typicality implies marginal typicality). Then, from (A6), we obtain:

\Pr\{\underline{U} \in B_{V,\epsilon}^n(U)\} \geq \frac{\Pr\{(\underline{U},\underline{V}) \in A_\epsilon^n(UV)\} - 1 + \epsilon}{\epsilon}  (A7)
= 1 - \frac{1 - \Pr\{(\underline{U},\underline{V}) \in A_\epsilon^n(UV)\}}{\epsilon}  (A8)
\geq 1 - \epsilon  (A9)

for large enough n. Here, (A9) follows from [9] (Thm. 7.6.1), which states that Pr{(U_, V_) ∈ A^n_ε(UV)} → 1 as n → ∞. This implies that 1 − Pr{(U_, V_) ∈ A^n_ε(UV)} ≤ ε² for positive ε and large enough n, which completes the proof.

Appendix A.3. Proof of P3

We see from [9] (Thm. 3.1.2) that:

|A_\epsilon^n(U)| \leq 2^{n(H(U)+\epsilon)}.  (A10)

Since B^n_{V,ε}(U) ⊆ A^n_ε(U), again by Definition 1, (A10) also holds for |B^n_{V,ε}(U)|. This proves the upper bound in P3. To prove the lower bound, we obtain from (A9), for n sufficiently large, that:

1 - \epsilon \leq \Pr\{\underline{U} \in B_{V,\epsilon}^n(U)\}  (A11)
\leq \sum_{\underline{u} \in B_{V,\epsilon}^n(U)} 2^{-n(H(U)-\epsilon)}  (A12)
= |B_{V,\epsilon}^n(U)|\, 2^{-n(H(U)-\epsilon)},  (A13)

where (A12) follows from (A1).

Appendix B. Proofs of Theorems 1, 2, 3, and 4

To derive AIRs, we will follow the classical approach, e.g., as in [9] (Section 7.7), and upper-bound the average of the probability of error P¯e over a random choice of sign-codebooks. This way, we will demonstrate the existence of at least one good sign-code. Again as in [9] (Section 7.7) and as explained in Section 4.3, we decode by joint typicality: the decoder looks for a unique message index pair (m^a,m^s) for which the corresponding amplitude-sign sequence (a_,s_) is jointly typical with the received sequence y_.

By the properties of weak typicality and B-typicality, the transmitted amplitude-sign sequence and the received sequence are jointly typical with high probability for n large enough. We call the event for which the transmitted amplitude-sign sequence is not jointly typical with the received sequence the first error event with average probability P¯e(1). Furthermore, the probability that any other (not transmitted) amplitude-sign sequence is jointly typical with the received sequence vanishes for asymptotically large n. We call the event that there is another amplitude-sign sequence that is jointly typical with the received sequence the second error event with average probability P¯e(2). Observing that these events are not disjoint, we can write [9] (Equation (7.75)):

\bar{P}_e \leq \bar{P}_e^{(1)} + \bar{P}_e^{(2)}.  (A14)

Appendix B.1. Proof of Theorem 1

For the error of the first kind, we can write:

P̄_e^{(1)} = ∑_{m_a=1}^{M_a} (1/M_a) ∑_{s_ ∈ S^n} p(s_) ∑_{y_ ∈ Y^n} p(y_|a_(m_a), s_) 𝟙[(a_(m_a), s_, y_) ∉ A_ε^n(ASY)] (A15)
= ∑_{m_a} (1/M_a) ∑_{s_} ∑_{y_} p(s_, y_|a_(m_a)) 𝟙[(a_(m_a), s_, y_) ∉ A_ε^n] (A16)
= ∑_{m_a} (1/M_a) Pr{(a_(m_a), S_, Y_) ∉ A_ε^n | A_ = a_(m_a)} (A17)
≤ ∑_{m_a} ε/M_a (A18)
= ε, (A19)

where we simplified the notation by replacing ∑_{m_a=1}^{M_a} by ∑_{m_a}, ∑_{s_ ∈ S^n} by ∑_{s_}, and ∑_{y_ ∈ Y^n} by ∑_{y_} in (A16). Furthermore, we dropped the index of the typical set A_ε^n(ASY) and used A_ε^n instead. We will follow these notations for summations and for the typical sets for the rest of the paper, assuming for the latter that the index of the typical set will be clear from the context. To obtain (A16), we used p(s_) p(y_|a_(m_a), s_) = p(s_, y_|a_(m_a)). Then, (A18) is a direct consequence of Definition 1 since a_(m_a) ∈ B_{SY,ε}^n(A) for m_a = 1, 2, …, M_a.

For the error of the second kind, we can write:

P̄_e^{(2)} ≤ ∑_{m_a} (1/M_a) ∑_{s_} p(s_) ∑_{y_} p(y_|a_(m_a), s_) ∑_{k_a=1, k_a≠m_a}^{M_a} ∑_{s̃_ ∈ S^n} p(s̃_) 𝟙[(a_(k_a), s̃_, y_) ∈ A_ε^n] (A20)
= M_a ∑_{m_a} ∑_{s_} (p(s_)/M_a) ∑_{y_} p(y_|a_(m_a), s_) ∑_{k_a≠m_a} ∑_{s̃_} (p(s̃_)/M_a) 𝟙[(a_(k_a), s̃_, y_) ∈ A_ε^n] (A21)
≤ M_a 2^{6nε} ∑_{m_a} ∑_{s_} p(a_(m_a)) p(s_) ∑_{y_} p(y_|a_(m_a), s_) · ∑_{k_a≠m_a} ∑_{s̃_} p(a_(k_a)) p(s̃_) 𝟙[(a_(k_a), s̃_, y_) ∈ A_ε^n] (A22)
≤ M_a 2^{6nε} ∑_{a_ ∈ A^n} ∑_{s_} p(a_) p(s_) ∑_{y_} p(y_|a_, s_) ∑_{ã_ ∈ A^n} ∑_{s̃_} p(ã_) p(s̃_) 𝟙[(ã_, s̃_, y_) ∈ A_ε^n] (A23)
= M_a 2^{6nε} ∑_{(y_, x̃_) ∈ A_ε^n} p(x̃_) p(y_) (A24)
≤ 2^{n(H(A)+ε)} 2^{6nε} |A_ε^n(XY)| 2^{-n(H(X)-ε)} 2^{-n(H(Y)-ε)} (A25)
≤ 2^{n(H(A)+7ε)} 2^{n(H(X,Y)+ε)} 2^{-n(H(X)-ε)} 2^{-n(H(Y)-ε)} (A26)
= 2^{n(H(A)-I(SA;Y)+10ε)}, (A27)

where we simplified the notation by replacing ∑_{k_a=1, k_a≠m_a}^{M_a} by ∑_{k_a≠m_a} and ∑_{s̃_ ∈ S^n} by ∑_{s̃_} in (A21). We will follow these notations for the rest of the paper. Then:

  • (A22)
    follows for n sufficiently large and for a_ ∈ B_{SY,ε}^n(A) from:
    1/M_a = 1/|B_{SY,ε}^n(A)| ≤ 2^{-n(H(A)-ε)}/(1-ε) (A28)
    = 2^{2nε}/(1-ε) · 2^{-n(H(A)+ε)} (A29)
    ≤ 2^{2nε}/(1-ε) · p(a_) (A30)
    ≤ 2^{3nε} p(a_), (A31)
    where (A28) follows from the B-typicality property P3, (A30) follows from the B-typicality property P1, and (A31) holds for all large enough n.
  • (A23)

    follows from summing over a_ ∈ A^n instead of over a_(m_a) ∈ B_ε^n, and over ã_ ∈ A^n instead of over a_(k_a) ∈ B_ε^n for k_a ≠ m_a.

  • (A24)

    is obtained by working out the summations over a_ and s_, and by replacing (ã_, s̃_) with x̃_.

  • (A25)

    follows from M_a = |B_ε^n(A)| ≤ 2^{n(H(A)+ε)}, i.e., the B-typicality property P3, and from (12).

  • (A26)

    follows from (15).

The conclusion from (A27) is that for H(A) < I(X;Y) - 10ε, the error probability of the second kind satisfies:

P̄_e^{(2)} ≤ ε (A32)

for n large enough. Using (A19) and (A32) in (A14), we find that the total error probability averaged over all possible sign-codes satisfies P̄_e ≤ 2ε for n large enough. This implies the existence of a basic sign-code with total error probability P_e = Pr{M̂_a ≠ M_a} ≤ 2ε. This holds for all ε > 0, and therefore, the rate:

R = H(A) < I(X;Y), (A33)

is achievable with basic sign-coding, which concludes the proof of Theorem 1.
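To make the Theorem 1 condition concrete, the following sketch is our own toy example (not from the paper): shaped amplitudes A ∈ {1, 3}, an independent uniform sign S, X = S·A, and a 4-ary symmetric channel with an arbitrarily chosen symbol error probability. It computes H(A) and I(X;Y) so one can check that H(A) < I(X;Y), i.e., that the basic sign-coding rate R = H(A) is achievable for these parameters.

```python
import math

# Toy sanity check for Theorem 1 (our own example, not from the paper):
# amplitudes A in {1, 3} with a shaped pmf, an independent uniform sign S,
# and X = S*A sent over a 4-ary symmetric channel. The Theorem 1 rate
# R = H(A) is achievable whenever H(A) < I(X;Y).
pA = {1: 0.7, 3: 0.3}
pX = {x: pA[abs(x)] / 2 for x in (-3, -1, 1, 3)}  # S uniform, independent of A
delta = 0.05  # probability that the channel moves X to one of the other 3 symbols

def H(dist):
    return -sum(v * math.log2(v) for v in dist.values() if v > 0)

symbols = (-3, -1, 1, 3)
pY = {y: sum(pX[x] * ((1 - delta) if y == x else delta / 3) for x in symbols)
      for y in symbols}
H_Y_given_X = H({0: 1 - delta, 1: delta / 3, 2: delta / 3, 3: delta / 3})
I_XY = H(pY) - H_Y_given_X
H_A = H(pA)
print(f"H(A) = {H_A:.3f}, I(X;Y) = {I_XY:.3f}")
```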

Appendix B.2. Proof of Theorem 2

For the error of the first kind, we can write:

P̄_e^{(1)} = ∑_{m_a} (1/M_a) ∑_{m_s=1}^{M_s} (1/2^{n_1}) ∑_{s_ ∈ S^{n_2}} p(s_) ∑_{y_} p(y_|a_(m_a), s_(m_s)s_) 𝟙[(a_(m_a), s_(m_s)s_, y_) ∉ A_ε^n] (A34)
= ∑_{m_a} (1/M_a) ∑_{m_s} ∑_{s_} 2^{-n} ∑_{y_} p(y_|a_(m_a), s_(m_s)s_) 𝟙[(a_(m_a), s_(m_s)s_, y_) ∉ A_ε^n] (A35)
= ∑_{m_a} (1/M_a) ∑_{m_s} ∑_{s_} ∑_{y_} p(s_(m_s)s_, y_|a_(m_a)) 𝟙[(a_(m_a), s_(m_s)s_, y_) ∉ A_ε^n] (A36)
= ∑_{m_a} (1/M_a) Pr{(a_(m_a), S_, Y_) ∉ A_ε^n | A_ = a_(m_a)} (A37)
≤ ∑_{m_a} ε/M_a (A38)
= ε, (A39)

where we simplified the notation by replacing ∑_{s_ ∈ S^{n_2}} by ∑_{s_} and ∑_{m_s=1}^{M_s} by ∑_{m_s} in (A35). We will follow these notations for the rest of the paper. To obtain (A35), we used the fact that S_ is uniform; more precisely, p(s_) = 2^{-n_2}. To obtain (A36), we used the fact that S_ is also uniform, and then, 2^{-n} p(y_|a_(m_a), s_(m_s)s_) = p(s_(m_s)s_, y_|a_(m_a)). Then, (A38) is a direct consequence of Definition 1 since a_(m_a) ∈ B_{SY,ε}^n(A) for m_a = 1, 2, …, M_a.

For the error of the second kind, we obtain:

P̄_e^{(2)} ≤ ∑_{m_a} (1/M_a) ∑_{m_s} (1/2^{n_1}) ∑_{s_} p(s_) ∑_{y_} p(y_|a_(m_a), s_(m_s)s_) · ∑_{(k_a,k_s)≠(m_a,m_s)} ∑_{s̃_} p(s̃_) 𝟙[(a_(k_a), s_(k_s)s̃_, y_) ∈ A_ε^n]
= M_a 2^{n_1} ∑_{m_a,m_s,s_} (2^{-n}/M_a) ∑_{y_} p(y_|a_(m_a), s_(m_s)s_) · ∑_{(k_a,k_s)≠(m_a,m_s)} ∑_{s̃_} (2^{-n}/M_a) 𝟙[(a_(k_a), s_(k_s)s̃_, y_) ∈ A_ε^n] (A40)
= M_a 2^{n_1} ∑_{m_a,m_s,s_} (2^{-n}/M_a) ∑_{y_} p(y_|a_(m_a), s_(m_s)s_) ∑_{k_a≠m_a, k_s, s̃_} (2^{-n}/M_a) 𝟙[(a_(k_a), s_(k_s)s̃_, y_) ∈ A_ε^n]
+ 2^{n_1} ∑_{m_a,m_s,s_} (2^{-n}/M_a) ∑_{y_} p(y_|a_(m_a), s_(m_s)s_) ∑_{k_s≠m_s, s̃_} 2^{-n} 𝟙[(a_(m_a), s_(k_s)s̃_, y_) ∈ A_ε^n]. (A41)

Here, we replaced nested summations over m_a, m_s, and s_ by a single summation ∑_{m_a,m_s,s_} for the sake of better readability. We will use this notation for the rest of the paper. Then:

  • (A40)

    follows from n = n_1 + n_2 and from the fact that S_ is uniform; more precisely, p(s_) = 2^{-n_2}.

  • (A41)

    is obtained by splitting ∑_{(k_a,k_s)≠(m_a,m_s)} into ∑_{k_a≠m_a, k_s} and ∑_{k_a=m_a, k_s≠m_s}.

From (A41), we obtain:

P̄_e^{(2)} ≤ M_a 2^{n_1} 2^{6nε} ∑_{m_a,m_s,s_} p(a_(m_a)) p(s_(m_s)s_) ∑_{y_} p(y_|a_(m_a), s_(m_s)s_) · ∑_{k_a≠m_a, k_s, s̃_} p(a_(k_a)) p(s_(k_s)s̃_) 𝟙[(a_(k_a), s_(k_s)s̃_, y_) ∈ A_ε^n]
+ 2^{n_1} 2^{3nε} ∑_{m_a,m_s,s_} p(a_(m_a)) p(s_(m_s)s_) ∑_{y_} p(y_|a_(m_a), s_(m_s)s_) · ∑_{k_s≠m_s, s̃_} p(s_(k_s)s̃_) 𝟙[(a_(m_a), s_(k_s)s̃_, y_) ∈ A_ε^n] (A42)
≤ M_a 2^{n_1} 2^{6nε} ∑_{a_, s′_s_} p(a_) p(s′_s_) ∑_{y_} p(y_|a_, s′_s_) ∑_{ã_, s̃′_s̃_} p(ã_) p(s̃′_s̃_) 𝟙[(ã_, s̃′_s̃_, y_) ∈ A_ε^n]
+ 2^{n_1} 2^{3nε} ∑_{a_, s′_s_} p(a_) p(s′_s_) ∑_{y_} p(y_|a_, s′_s_) ∑_{s̃′_s̃_} p(s̃′_s̃_) 𝟙[(a_, s̃′_s̃_, y_) ∈ A_ε^n] (A43)
= M_a 2^{n_1} 2^{6nε} ∑_{a_,s_} p(a_) p(s_) ∑_{y_} p(y_|a_,s_) ∑_{ã_,s̃_} p(ã_) p(s̃_) 𝟙[(ã_, s̃_, y_) ∈ A_ε^n]
+ 2^{n_1} 2^{3nε} ∑_{a_,s_} p(a_) p(s_) ∑_{y_} p(y_|a_,s_) ∑_{s̃_} p(s̃_) 𝟙[(a_, s̃_, y_) ∈ A_ε^n], (A44)

where:

  • (A42)
    follows for n sufficiently large and for a_ ∈ B_{SY,ε}^n(A) from:
    1/M_a ≤ 2^{3nε} p(a_), (A45)
    which follows from (A31), and from p(s′_s_) = 2^{-n},
  • (A43)

    follows from summing over a_ ∈ A^n instead of over a_(m_a) ∈ B_ε^n, and over ã_ ∈ A^n instead of over a_(k_a) ∈ B_ε^n for k_a ≠ m_a. Moreover, it follows from summing over s′_ ∈ S^{n_1} instead of over s_(k_s) for k_s = 1, 2, …, M_s and k_s ≠ m_s.

  • (A44)

    follows from substituting s_ for s′_s_ and s̃_ for s̃′_s̃_.

Finally, from (A44), we obtain:

P̄_e^{(2)} ≤ M_a 2^{n_1} 2^{6nε} ∑_{y_} p(y_) ∑_{x̃_} p(x̃_) 𝟙[(x̃_, y_) ∈ A_ε^n] + 2^{n_1} 2^{3nε} ∑_{a_,y_} p(a_, y_) ∑_{s̃_} p(s̃_) 𝟙[(a_, s̃_, y_) ∈ A_ε^n] (A46)
≤ 2^{n(H(A)+ε)} 2^{nγ} 2^{6nε} |A_ε^n(XY)| 2^{-n(H(X)-ε)} 2^{-n(H(Y)-ε)} + 2^{nγ} 2^{3nε} |A_ε^n(SAY)| 2^{-n(H(A,Y)-ε)} 2^{-n(H(S)-ε)} (A47)
≤ 2^{n(H(A)+7ε)} 2^{nγ} 2^{n(H(X,Y)+ε)} 2^{-n(H(X)-ε)} 2^{-n(H(Y)-ε)} + 2^{nγ} 2^{3nε} 2^{n(H(S,A,Y)+ε)} 2^{-n(H(A,Y)-ε)} 2^{-n(H(S)-ε)} (A48)
= 2^{n(H(A)+γ+10ε-I(X;Y))} + 2^{n(γ+6ε-I(S;A,Y))}. (A49)

Here, we substituted n_1 = nγ in (A47). Then:

  • (A46)

    is obtained by working out the summations over a_ and s_ in the first part, and over s_ in the second part. Moreover, we replaced (ã_, s̃_) with x̃_.

  • (A47)

    is obtained using for the first part that M_a = |B_ε^n(A)| ≤ 2^{n(H(A)+ε)}, i.e., the B-typicality property P3, and (12). For the second part, we used (12) for p(s̃_) and (16) for p(a_, y_).

  • (A48)

    follows from (15) and its extension to jointly typical triplets; more precisely, |A_ε^n(SAY)| ≤ 2^{n(H(S,A,Y)+ε)}.

The conclusion from (A49) is that for H(A) + γ < I(X;Y) - 10ε and γ < I(S;A,Y) - 6ε, the error probability of the second kind satisfies:

P̄_e^{(2)} ≤ ε, (A50)

for n large enough. The first constraint, i.e., H(A) + γ < I(X;Y) - 10ε, already implies the second constraint, i.e., γ < I(S;A,Y) - 6ε, since:

γ < I(X;Y) - H(A) - 10ε ≤ I(S,A;Y) - I(A;Y) - 10ε (A51)
= I(S;Y|A) - 10ε (A52)
≤ I(S;Y|A) + I(S;A) - 10ε (A53)
= I(S;A,Y) - 10ε, (A54)

where we substituted (S,A) for X in (A51). Here, (A51) follows from I(A;Y) ≤ H(A) [9] (Thm. 2.4.1), and both (A52) and (A54) follow from the chain rule for MI [9] (Thm. 2.5.2).
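The chain-rule identities invoked in (A51)-(A54) can be sanity-checked numerically. The sketch below is our own toy example (a random joint pmf on three binary variables, not a distribution from the paper); it verifies that I(S,A;Y) - I(A;Y) = I(S;Y|A) and I(S;A,Y) = I(S;Y|A) + I(S;A), computing every quantity from entropies of marginals.

```python
import itertools
import math
import random

# Sanity check (toy, not from the paper) of the chain-rule identities used
# in (A51)-(A54): I(S,A;Y) - I(A;Y) = I(S;Y|A) and
# I(S;A,Y) = I(S;Y|A) + I(S;A), for a random joint pmf on {0,1}^3.
random.seed(3)
p = {k: random.random() for k in itertools.product((0, 1), repeat=3)}
z = sum(p.values())
p = {k: v / z for k, v in p.items()}  # keys are (s, a, y)

def H(axes):
    # entropy of the marginal of p over the given coordinate indices
    marg = {}
    for k, v in p.items():
        key = tuple(k[i] for i in axes)
        marg[key] = marg.get(key, 0.0) + v
    return -sum(v * math.log2(v) for v in marg.values() if v > 0)

I_SA_Y = H((0, 1)) + H((2,)) - H((0, 1, 2))    # I(S,A;Y)
I_A_Y = H((1,)) + H((2,)) - H((1, 2))          # I(A;Y)
I_S_AY = H((0,)) + H((1, 2)) - H((0, 1, 2))    # I(S;A,Y)
I_S_A = H((0,)) + H((1,)) - H((0, 1))          # I(S;A)
I_S_Y_given_A = H((0, 1)) + H((1, 2)) - H((1,)) - H((0, 1, 2))  # I(S;Y|A)
print(I_SA_Y - I_A_Y, I_S_Y_given_A, I_S_AY - I_S_A)
```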

Using (A39) and (A50) in (A14), we find that the total error probability averaged over all possible modified sign-codes satisfies P̄_e ≤ 2ε for n large enough. This implies the existence of a modified sign-code with total error probability P_e = Pr{(M̂_a, M̂_s) ≠ (M_a, M_s)} ≤ 2ε. This holds for all ε > 0, and thus, the rate:

R = H(A) + γ < I(X;Y), (A55)

is achievable with modified sign-coding, which concludes the proof of Theorem 2.

Appendix B.3. Proof of Theorem 3

For the error of the first kind, we can write:

P̄_e^{(1)} = ∑_{m_a} (1/M_a) ∑_{s_} p(s_) ∑_{y_} p(y_|b_(m_a), s_) · 𝟙[((b_1(m_a), y_) ∉ A_ε^n) ∨ ((b_2(m_a), y_) ∉ A_ε^n) ∨ ⋯ ∨ ((b_m(m_a), y_) ∉ A_ε^n) ∨ ((s_, y_) ∉ A_ε^n)] (A56)
≤ ∑_{m_a} (1/M_a) ∑_{s_} ∑_{y_} p(s_, y_|b_(m_a)) 𝟙[(b_(m_a), s_, y_) ∉ A_ε^n] (A57)
= ∑_{m_a} (1/M_a) Pr{(b_(m_a), S_, Y_) ∉ A_ε^n | B_ = b_(m_a)} (A58)
≤ ∑_{m_a} ε/M_a (A59)
= ε, (A60)

where we used b_(m_a) to denote (b_1(m_a), b_2(m_a), …, b_m(m_a)) in (A56) and B_ to denote (B_1, B_2, …, B_m) in (A58). Then, we used p(s_) p(y_|b_(m_a), s_) = p(s_, y_|b_(m_a)) in (A57). Here, (A57) follows from the fact that if at least one of b_1(m_a), b_2(m_a), …, b_m(m_a) or s_ is not jointly typical with y_, then (b_(m_a), s_, y_) is not jointly typical. Then, (A59) is a direct consequence of Definition 1 since b_(m_a) ∈ B_{SY,ε}^n(B_1 B_2 ⋯ B_m) for m_a = 1, 2, …, M_a.

For the error of the second kind, we can write:

P̄_e^{(2)} ≤ ∑_{m_a} (1/M_a) ∑_{s_} p(s_) ∑_{y_} p(y_|b_(m_a), s_) · ∑_{k_a≠m_a} ∑_{s̃_} p(s̃_) 𝟙[(b_1(k_a), y_) ∈ A_ε^n, (b_2(k_a), y_) ∈ A_ε^n, …, (b_m(k_a), y_) ∈ A_ε^n, (s̃_, y_) ∈ A_ε^n]
= M_a ∑_{m_a} ∑_{s_} (p(s_)/M_a) ∑_{y_} p(y_|b_(m_a), s_) · ∑_{k_a≠m_a} ∑_{s̃_} (p(s̃_)/M_a) 𝟙[(b_1(k_a), y_) ∈ A_ε^n, (b_2(k_a), y_) ∈ A_ε^n, …, (b_m(k_a), y_) ∈ A_ε^n, (s̃_, y_) ∈ A_ε^n]
≤ M_a 2^{6nε} ∑_{m_a} ∑_{s_} p(b_(m_a)) p(s_) ∑_{y_} p(y_|b_(m_a), s_) · ∑_{k_a≠m_a} ∑_{s̃_} p(s̃_) p(b_(k_a)) 𝟙[(b_1(k_a), y_) ∈ A_ε^n, (b_2(k_a), y_) ∈ A_ε^n, …, (b_m(k_a), y_) ∈ A_ε^n, (s̃_, y_) ∈ A_ε^n] (A61)
≤ M_a 2^{6nε} ∑_{b_ ∈ {0,1}^{mn}} ∑_{s_} p(b_) p(s_) ∑_{y_} p(y_|b_, s_) · ∑_{b̃_ ∈ {0,1}^{mn}} ∑_{s̃_} p(s̃_) p(b̃_) 𝟙[(b̃_1, y_) ∈ A_ε^n, (b̃_2, y_) ∈ A_ε^n, …, (b̃_m, y_) ∈ A_ε^n, (s̃_, y_) ∈ A_ε^n] (A62)
= M_a 2^{6nε} ∑_{y_} p(y_) ∑_{b̃_, s̃_} p(b̃_, s̃_) 𝟙[(b̃_1, y_) ∈ A_ε^n, (b̃_2, y_) ∈ A_ε^n, …, (b̃_m, y_) ∈ A_ε^n, (s̃_, y_) ∈ A_ε^n] (A63)
≤ 2^{n(H(B)+7ε)} |A_ε^n(Y)| 2^{-n(H(Y)-ε)} · |A_ε^n(B_1|y_)| |A_ε^n(B_2|y_)| ⋯ |A_ε^n(B_m|y_)| |A_ε^n(S|y_)| 2^{-n(H(B,S)-ε)} (A64)
≤ 2^{n(H(B)+7ε)} 2^{n(H(Y)+ε)} 2^{-n(H(Y)-ε)} · 2^{n(H(B_1|Y)+H(B_2|Y)+⋯+H(B_m|Y)+H(S|Y)+2(m+1)ε)} 2^{-n(H(B,S)-ε)} (A65)
= 2^{n(H(B)-H(B,S)+H(B_1|Y)+H(B_2|Y)+⋯+H(B_m|Y)+H(S|Y)+(12+2m)ε)}, (A66)

where we used b_ to denote (b_1, b_2, …, b_m) and b̃_ to denote (b̃_1, b̃_2, …, b̃_m) in (A62). We also used B to denote (B_1, B_2, …, B_m) in (A64). Finally, we simplified the notation by replacing ∑_{b̃_ ∈ {0,1}^{mn}} by ∑_{b̃_} in (A63). Then:

  • (A61)

    follows for n sufficiently large and for b_ ∈ B_{SY,ε}^n(B) from 1/M_a ≤ 2^{3nε} p(b_), which can be shown in a similar way as (A31) was derived.

  • (A62)

    follows from summing over b_ ∈ {0,1}^{mn} instead of over b_(m_a) ∈ B_ε^n, and over b̃_ ∈ {0,1}^{mn} instead of over b_(k_a) ∈ B_ε^n for k_a ≠ m_a.

  • (A63)

    is obtained by working out the summations over b_1, b_2, …, b_m, and s_.

  • (A64)

    follows from M_a = |B_ε^n(B)| ≤ 2^{n(H(B)+ε)}, i.e., the B-typicality property P3, from (12), and from (17).

  • (A65)

    follows from (11) and (19).

The conclusion from (A66) is that for:

H(B) < H(B,S) - H(S|Y) - ∑_{i=1}^m H(B_i|Y) - (12+2m)ε = R_BMD(p(b,s)) - (12+2m)ε,

the error probability of the second kind satisfies:

P̄_e^{(2)} ≤ ε (A67)

for n large enough. Using (A60) and (A67) in (A14), we find that the total error probability averaged over all possible sign-codes satisfies P̄_e ≤ 2ε for n large enough. This implies the existence of a sign-code with total error probability P_e = Pr{M̂_a ≠ M_a} ≤ 2ε. This holds for all ε > 0, and thus, the rate:

R = H(B) < R_BMD (A68)

is achievable with sign-coding and BMD, which concludes the proof of Theorem 3.
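Since bit-metric decoding treats the bit levels independently, the rate R_BMD = H(B_1,…,B_m,S) - H(S|Y) - ∑_i H(B_i|Y) can never exceed the symbol-metric rate I(B_1,…,B_m,S;Y), by subadditivity of conditional entropy. The sketch below is our own toy check of this relation (random joint pmfs on two bit levels, one sign bit, and a binary output; none of these distributions come from the paper).

```python
import itertools
import math
import random

# Toy numerical check (our own example) that the bit-metric decoding rate of
# Theorem 3, R_BMD = H(B1,B2,S) - H(S|Y) - sum_i H(Bi|Y), never exceeds the
# symbol-metric rate I(B1,B2,S;Y), for random joint pmfs on {0,1}^4 with
# coordinates (b1, b2, s, y).
random.seed(5)

def H(p, axes):
    # entropy of the marginal of p over the given coordinate indices
    marg = {}
    for k, v in p.items():
        key = tuple(k[i] for i in axes)
        marg[key] = marg.get(key, 0.0) + v
    return -sum(v * math.log2(v) for v in marg.values() if v > 0)

for _ in range(20):
    p = {k: random.random() for k in itertools.product((0, 1), repeat=4)}
    z = sum(p.values())
    p = {k: v / z for k, v in p.items()}
    Hc = lambda a, b: H(p, a + b) - H(p, b)  # conditional entropy H(.|.)
    R_BMD = H(p, (0, 1, 2)) - Hc((2,), (3,)) - Hc((0,), (3,)) - Hc((1,), (3,))
    I_XY = H(p, (0, 1, 2)) + H(p, (3,)) - H(p, (0, 1, 2, 3))
    assert R_BMD <= I_XY + 1e-12
print("R_BMD <= I(X;Y) held for 20 random pmfs")
```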

Appendix B.4. Proof of Theorem 4

For the error of the first kind, we can write:

P̄_e^{(1)} = ∑_{m_a} (1/M_a) ∑_{m_s} (1/2^{n_1}) ∑_{s_} p(s_) ∑_{y_} p(y_|b_(m_a), s_(m_s)s_) · 𝟙[⋁_{i=1}^m ((b_i(m_a), y_) ∉ A_ε^n) ∨ ((s_(m_s)s_, y_) ∉ A_ε^n)]
= ∑_{m_a} (1/M_a) ∑_{m_s} ∑_{s_} 2^{-n} ∑_{y_} p(y_|b_(m_a), s_(m_s)s_) · 𝟙[⋁_{i=1}^m ((b_i(m_a), y_) ∉ A_ε^n) ∨ ((s_(m_s)s_, y_) ∉ A_ε^n)] (A69)
≤ ∑_{m_a} (1/M_a) ∑_{m_s} ∑_{s_} ∑_{y_} p(s_(m_s)s_, y_|b_(m_a)) 𝟙[(b_(m_a), s_(m_s)s_, y_) ∉ A_ε^n] (A70)
= ∑_{m_a} (1/M_a) Pr{(b_(m_a), S_, Y_) ∉ A_ε^n | B_ = b_(m_a)} ≤ ∑_{m_a} ε/M_a (A71)
= ε. (A72)

Here, to obtain (A69), we used the fact that S_ is uniform; more precisely, p(s_) = 2^{-n_2}. Then, we used 2^{-n} p(y_|b_(m_a), s_(m_s)s_) = p(s_(m_s)s_, y_|b_(m_a)) in (A70). Furthermore, (A70) also follows from the fact that if at least one of b_1(m_a), b_2(m_a), …, b_m(m_a) or s_(m_s)s_ is not jointly typical with y_, then (b_(m_a), s_(m_s)s_, y_) is not jointly typical. Then, (A71) is a direct consequence of Definition 1 since b_(m_a) ∈ B_{SY,ε}^n(B_1 B_2 ⋯ B_m) for m_a = 1, 2, …, M_a.

For the error of the second kind, we can write:

P̄_e^{(2)} ≤ ∑_{m_a} (1/M_a) ∑_{m_s} (1/2^{n_1}) ∑_{s_} p(s_) ∑_{y_} p(y_|b_(m_a), s_(m_s)s_) · ∑_{(k_a,k_s)≠(m_a,m_s)} ∑_{s̃_} p(s̃_) 𝟙[⋀_{i=1}^m ((b_i(k_a), y_) ∈ A_ε^n) ∧ ((s_(k_s)s̃_, y_) ∈ A_ε^n)]
= M_a 2^{n_1} ∑_{m_a,m_s,s_} (2^{-n}/M_a) ∑_{y_} p(y_|b_(m_a), s_(m_s)s_) · ∑_{(k_a,k_s)≠(m_a,m_s)} ∑_{s̃_} (2^{-n}/M_a) 𝟙[⋀_{i=1}^m ((b_i(k_a), y_) ∈ A_ε^n) ∧ ((s_(k_s)s̃_, y_) ∈ A_ε^n)] (A73)
= M_a 2^{n_1} ∑_{m_a,m_s,s_} (2^{-n}/M_a) ∑_{y_} p(y_|b_(m_a), s_(m_s)s_) · ∑_{k_a≠m_a, k_s, s̃_} (2^{-n}/M_a) 𝟙[⋀_{i=1}^m ((b_i(k_a), y_) ∈ A_ε^n) ∧ ((s_(k_s)s̃_, y_) ∈ A_ε^n)]
+ 2^{n_1} ∑_{m_a,m_s,s_} (2^{-n}/M_a) ∑_{y_} p(y_|b_(m_a), s_(m_s)s_) · ∑_{k_s≠m_s, s̃_} 2^{-n} 𝟙[⋀_{i=1}^m ((b_i(m_a), y_) ∈ A_ε^n) ∧ ((s_(k_s)s̃_, y_) ∈ A_ε^n)], (A74)

where (A73) follows from n = n_1 + n_2 and from the fact that S_ is uniform; more precisely, p(s_) = 2^{-n_2}. Then, (A74) is obtained by splitting ∑_{(k_a,k_s)≠(m_a,m_s)} into ∑_{k_a≠m_a, k_s} and ∑_{k_a=m_a, k_s≠m_s}.

From (A74), we obtain:

P̄_e^{(2)} ≤ M_a 2^{n_1} 2^{6nε} ∑_{m_a,m_s,s_} p(b_(m_a)) p(s_(m_s)s_) ∑_{y_} p(y_|b_(m_a), s_(m_s)s_) · ∑_{k_a≠m_a, k_s, s̃_} p(b_(k_a)) p(s_(k_s)s̃_) 𝟙[⋀_{i=1}^m ((b_i(k_a), y_) ∈ A_ε^n) ∧ ((s_(k_s)s̃_, y_) ∈ A_ε^n)]
+ 2^{n_1} 2^{3nε} ∑_{m_a,m_s,s_} p(b_(m_a)) p(s_(m_s)s_) ∑_{y_} p(y_|b_(m_a), s_(m_s)s_) · ∑_{k_s≠m_s, s̃_} p(s_(k_s)s̃_) 𝟙[⋀_{i=1}^m ((b_i(m_a), y_) ∈ A_ε^n) ∧ ((s_(k_s)s̃_, y_) ∈ A_ε^n)] (A75)
≤ M_a 2^{n_1} 2^{6nε} ∑_{b_, s′_s_} p(b_) p(s′_s_) ∑_{y_} p(y_|b_, s′_s_) ∑_{b̃_, s̃′_s̃_} p(b̃_) p(s̃′_s̃_) · 𝟙[⋀_{i=1}^m ((b̃_i, y_) ∈ A_ε^n) ∧ ((s̃′_s̃_, y_) ∈ A_ε^n)]
+ 2^{n_1} 2^{3nε} ∑_{b_, s′_s_} p(b_) p(s′_s_) ∑_{y_} p(y_|b_, s′_s_) ∑_{s̃′_s̃_} p(s̃′_s̃_) · 𝟙[⋀_{i=1}^m ((b_i, y_) ∈ A_ε^n) ∧ ((s̃′_s̃_, y_) ∈ A_ε^n)] (A76)
= M_a 2^{n_1} 2^{6nε} ∑_{b_,s_} p(b_) p(s_) ∑_{y_} p(y_|b_,s_) ∑_{b̃_,s̃_} p(b̃_) p(s̃_) 𝟙[⋀_{i=1}^m ((b̃_i, y_) ∈ A_ε^n) ∧ ((s̃_, y_) ∈ A_ε^n)]
+ 2^{n_1} 2^{3nε} ∑_{b_,s_} p(b_) p(s_) ∑_{y_} p(y_|b_,s_) ∑_{s̃_} p(s̃_) 𝟙[⋀_{i=1}^m ((b_i, y_) ∈ A_ε^n) ∧ ((s̃_, y_) ∈ A_ε^n)], (A77)

where:

  • (A75)

    follows for n sufficiently large and for b_ ∈ B_{SY,ε}^n(B) from 1/M_a ≤ 2^{3nε} p(b_) and from p(s′_s_) = 2^{-n},

  • (A76)

    follows from summing over b_ ∈ {0,1}^{mn} instead of over b_(m_a) ∈ B_ε^n, and over b̃_ ∈ {0,1}^{mn} instead of over b_(k_a) ∈ B_ε^n for k_a ≠ m_a. Moreover, it follows from summing over s′_ ∈ S^{n_1} instead of over s_(k_s) for k_s = 1, 2, …, M_s and k_s ≠ m_s,

  • (A77)

    follows from substituting s_ for s′_s_ and s̃_ for s̃′_s̃_.

Finally, from (A77), we obtain:

P̄_e^{(2)} ≤ M_a 2^{n_1} 2^{6nε} ∑_{y_} p(y_) ∑_{b̃_, s̃_} p(b̃_, s̃_) 𝟙[⋀_{i=1}^m ((b̃_i, y_) ∈ A_ε^n) ∧ ((s̃_, y_) ∈ A_ε^n)] + 2^{n_1} 2^{3nε} ∑_{b_,y_} p(b_, y_) ∑_{s̃_} p(s̃_) 𝟙[⋀_{i=1}^m ((b_i, y_) ∈ A_ε^n) ∧ ((s̃_, y_) ∈ A_ε^n)] (A78)
≤ 2^{n(H(B)+ε)} 2^{nγ} 2^{6nε} |A_ε^n(Y)| 2^{-n(H(Y)-ε)} ∏_{i=1}^m |A_ε^n(B_i|y_)| · |A_ε^n(S|y_)| 2^{-n(H(B_1 B_2 ⋯ B_m S)-ε)}
+ 2^{nγ} 2^{3nε} |A_ε^n(Y)| 2^{-n(H(B,Y)-ε)} 2^{-n(H(S)-ε)} ∏_{i=1}^m |A_ε^n(B_i|y_)| · |A_ε^n(S|y_)| (A79)
≤ 2^{n(H(B)+ε)} 2^{nγ} 2^{6nε} 2^{n(H(Y)+ε)} 2^{-n(H(Y)-ε)} ∏_{i=1}^m 2^{n(H(B_i|Y)+2ε)} · 2^{n(H(S|Y)+2ε)} 2^{-n(H(B,S)-ε)}
+ 2^{nγ} 2^{3nε} 2^{n(H(Y)+ε)} 2^{-n(H(B,Y)-ε)} 2^{-n(H(S)-ε)} ∏_{i=1}^m 2^{n(H(B_i|Y)+2ε)} · 2^{n(H(S|Y)+2ε)} (A80)
= 2^{n(H(B)+γ+∑_{i=1}^m H(B_i|Y)+H(S|Y)-H(B,S)+(12+2m)ε)} + 2^{n(γ+H(Y)-H(B,Y)-H(S)+∑_{i=1}^m H(B_i|Y)+H(S|Y)+(8+2m)ε)}. (A81)

Here, we substituted n_1 = nγ in (A79). Then:

  • (A78)

    is obtained by working out the summations over b_1, b_2, …, b_m, s_ in the first part, and over s_ in the second part.

  • (A79)

    is obtained using for the first part that M_a = |B_ε^n(B)| ≤ 2^{n(H(B)+ε)}, i.e., the B-typicality property P3, (12) for p(y_), and (17) for p(b̃_, s̃_). For the second part, we used (12) for p(s̃_) and (17) for p(b_, y_).

  • (A80)

    follows from (11) and (19).

The conclusion from (A81) is that for:

H(B) + γ < R_BMD - (12+2m)ε, (A82)

and for:

γ < H(B,Y) + H(S) - H(Y) - ∑_{i=1}^m H(B_i|Y) - H(S|Y) - (8+2m)ε, (A83)

the error probability of the second kind satisfies:

P̄_e^{(2)} ≤ ε (A84)

for n large enough. The second constraint (A83) is already implied by the first constraint (A82) since:

H(B,Y) + H(S) - H(Y) - ∑_{i=1}^m H(B_i|Y) - H(S|Y) - (8+2m)ε (A85)
= H(B,Y) + H(S) - H(Y) - ∑_{i=1}^m H(B_i|Y) - H(S|Y) + H(B,S) - H(B,S) - (8+2m)ε (A86)
= H(B,Y) + H(S) - H(Y) + R_BMD - H(B) - H(S) - (8+2m)ε (A87)
= H(B|Y) + R_BMD - H(B) - (8+2m)ε, (A88)

where (A87) uses the definition of R_BMD and the independence of B and S. Since H(B|Y) ≥ 0 and (8+2m)ε ≤ (12+2m)ε, the right-hand side of (A83) is not smaller than R_BMD - H(B) - (12+2m)ε, which exceeds γ whenever (A82) holds.
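The entropy algebra in (A85)-(A88) can be verified numerically. The sketch below is our own toy construction (B = (B1, B2) drawn independently of a sign bit S, and a randomly generated channel to an output Y ∈ {0, 1, 2}; nothing here comes from the paper): it checks that the right-hand side of (A83) equals H(B|Y) + R_BMD - H(B).

```python
import itertools
import math
import random

# Numerical check (our own toy construction, not from the paper) of the
# entropy algebra in (A85)-(A88): with B = (B1, B2) independent of the sign
# bit S, the right-hand side of (A83) equals H(B|Y) + R_BMD - H(B).
random.seed(11)
pB = {k: random.random() for k in itertools.product((0, 1), repeat=2)}
zB = sum(pB.values())
pB = {k: v / zB for k, v in pB.items()}
pS = {0: 0.6, 1: 0.4}
ch = {}  # random channel p(y | b1, b2, s) with y in {0, 1, 2}
for x in itertools.product((0, 1), repeat=3):
    w = [random.random() for _ in range(3)]
    ch[x] = [wi / sum(w) for wi in w]

# joint pmf over (b1, b2, s, y); B and S are independent by construction
p = {}
for (b1, b2), pb in pB.items():
    for s, ps in pS.items():
        for y in range(3):
            p[(b1, b2, s, y)] = pb * ps * ch[(b1, b2, s)][y]

def H(axes):
    # entropy of the marginal of p over the given coordinate indices
    marg = {}
    for k, v in p.items():
        key = tuple(k[i] for i in axes)
        marg[key] = marg.get(key, 0.0) + v
    return -sum(v * math.log2(v) for v in marg.values() if v > 0)

Hc = lambda a, b: H(a + b) - H(b)  # conditional entropy H(.|.)
R_BMD = H((0, 1, 2)) - Hc((2,), (3,)) - Hc((0,), (3,)) - Hc((1,), (3,))
lhs = H((0, 1, 3)) + H((2,)) - H((3,)) - Hc((0,), (3,)) - Hc((1,), (3,)) - Hc((2,), (3,))
rhs = Hc((0, 1), (3,)) + R_BMD - H((0, 1))
print(lhs, rhs)
```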

Using (A72) and (A84) in (A14), we find that the total error probability averaged over all possible modified sign-codes satisfies P̄_e ≤ 2ε for n large enough. This implies the existence of a modified sign-code with total error probability P_e = Pr{(M̂_a, M̂_s) ≠ (M_a, M_s)} ≤ 2ε. This holds for all ε > 0, and thus, the rate:

R = H(B) + γ < R_BMD, (A89)

is achievable with modified sign-coding, which concludes the proof of Theorem 4.

Author Contributions

Conceptualization, Y.C.G. and F.M.J.W.; formal analysis, Y.C.G., A.A., and F.M.J.W.; software, Y.C.G.; writing, original draft, Y.C.G. and F.M.J.W.; writing, review and editing, Y.C.G., A.A., and F.M.J.W. All authors have read and agreed to the published version of the manuscript.

Funding

The work of Y.C.G. and A.A. received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (Grant Agreement No. 757791).

Conflicts of Interest

The authors declare no conflict of interest.

References

  • 1. Imai H., Hirakawa S. A new multilevel coding method using error-correcting codes. IEEE Trans. Inf. Theory. 1977;23:371–377. doi: 10.1109/TIT.1977.1055718.
  • 2. Wachsmann U., Fischer R.F.H., Huber J.B. Multilevel codes: Theoretical concepts and practical design rules. IEEE Trans. Inf. Theory. 1999;45:1361–1391. doi: 10.1109/18.771140.
  • 3. Ungerböck G. Channel coding with multilevel/phase signals. IEEE Trans. Inf. Theory. 1982;28:55–67. doi: 10.1109/TIT.1982.1056454.
  • 4. Zehavi E. 8-PSK trellis codes for a Rayleigh channel. IEEE Trans. Commun. 1992;40:873–884. doi: 10.1109/26.141453.
  • 5. Caire G., Taricco G., Biglieri E. Bit-interleaved coded modulation. IEEE Trans. Inf. Theory. 1998;44:927–946. doi: 10.1109/18.669123.
  • 6. IEEE Standard for Information Technology—Telecommunications and Information Exchange between Systems Local and Metropolitan Area Networks—Specific Requirements—Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications. IEEE Standards Association; Piscataway, NJ, USA: 2016. pp. 1–3534. IEEE Std 802.11-2016 (Revision of IEEE Std 802.11-2012).
  • 7. Digital Video Broadcasting (DVB); 2nd Generation Framing Structure, Channel Coding and Modulation Systems for Broadcasting, Interactive Services, News Gathering and Other Broadband Satellite Applications (DVB-S2). European Telecommunications Standards Institute; Valbonne, France: 2009. ETSI Standard EN 302 307, Rev. 1.2.1.
  • 8. Böcherer G., Steiner F., Schulte P. Bandwidth efficient and rate-matched low-density parity-check coded modulation. IEEE Trans. Commun. 2015;63:4651–4665. doi: 10.1109/TCOMM.2015.2494016.
  • 9. Cover T.M., Thomas J.A. Elements of Information Theory. 2nd ed. John Wiley & Sons; Hoboken, NJ, USA: 2006.
  • 10. Buchali F., Steiner F., Böcherer G., Schmalen L., Schulte P., Idler W. Rate adaptation and reach increase by probabilistically shaped 64-QAM: An experimental demonstration. J. Lightw. Technol. 2016;34:1599–1609. doi: 10.1109/JLT.2015.2510034.
  • 11. Idler W., Buchali F., Schmalen L., Lach E., Braun R., Böcherer G., Schulte P., Steiner F. Field trial of a 1 Tb/s super-channel network using probabilistically shaped constellations. J. Lightw. Technol. 2017;35:1399–1406. doi: 10.1109/JLT.2017.2664581.
  • 12. Böcherer G. Achievable rates for probabilistic shaping. arXiv 2018, arXiv:1707.01134.
  • 13. Böcherer G. Principles of Coded Modulation. Habilitation Thesis. TUM Department of Electrical and Computer Engineering, Technical University of Munich; Munich, Germany: 2018.
  • 14. Amjad R.A. Information rates and error exponents for probabilistic amplitude shaping; Proceedings of the 2018 IEEE Information Theory Workshop (ITW); Guangzhou, China, 25–29 November 2018.
  • 15. Gallager R.G. Information Theory and Reliable Communication. John Wiley & Sons; New York, NY, USA: 1968.
  • 16. Kramer G. Topics in multi-user information theory. Found. Trends Commun. Inf. Theory. 2008;4:265–444. doi: 10.1561/0100000028.
  • 17. Kaplan G., Shamai S. Information rates and error exponents of compound channels with application to antipodal signaling in a fading environment. AEÜ Archiv für Elektronik und Übertragungstechnik. 1993;47:228–239.
  • 18. Merhav N., Kaplan G., Lapidoth A., Shamai S. On information rates for mismatched decoders. IEEE Trans. Inf. Theory. 1994;40:1953–1967. doi: 10.1109/18.340469.
  • 19. Szczecinski L., Alvarado A. Bit-Interleaved Coded Modulation: Fundamentals, Analysis, and Design. John Wiley & Sons; Chichester, UK: 2015.
  • 20. Martinez A., Guillén i Fàbregas A., Caire G., Willems F.M.J. Bit-interleaved coded modulation revisited: A mismatched decoding perspective. IEEE Trans. Inf. Theory. 2009;55:2756–2765. doi: 10.1109/TIT.2009.2018177.
  • 21. Guillén i Fàbregas A., Martinez A. Bit-interleaved coded modulation with shaping; Proceedings of the 2010 IEEE Information Theory Workshop; Dublin, Ireland, 30 August–3 September 2010.
  • 22. Alvarado A., Brännström F., Agrell E. High SNR bounds for the BICM capacity; Proceedings of the 2011 IEEE Information Theory Workshop; Paraty, Brazil, 16–20 October 2011.
  • 23. Peng L. Fundamentals of Bit-Interleaved Coded Modulation and Reliable Source Transmission. Ph.D. Thesis. University of Cambridge; Cambridge, UK: 2012.
  • 24. Böcherer G. Probabilistic signal shaping for bit-metric decoding; Proceedings of the 2014 IEEE International Symposium on Information Theory; Honolulu, HI, USA, 29 June–4 July 2014.
  • 25. Böcherer G. Probabilistic signal shaping for bit-metric decoding. arXiv 2014, arXiv:1401.6190.
  • 26. Böcherer G. Achievable rates for shaped bit-metric decoding. arXiv 2016, arXiv:1410.8075.
  • 27. Schulte P., Böcherer G. Constant composition distribution matching. IEEE Trans. Inf. Theory. 2016;62:430–434. doi: 10.1109/TIT.2015.2499181.
  • 28. Fehenberger T., Millar D.S., Koike-Akino T., Kojima K., Parsons K. Multiset-partition distribution matching. IEEE Trans. Commun. 2019;67:1885–1893. doi: 10.1109/TCOMM.2018.2881091.
  • 29. Schulte P., Steiner F. Divergence-optimal fixed-to-fixed length distribution matching with shell mapping. IEEE Wirel. Commun. Lett. 2019;8:620–623. doi: 10.1109/LWC.2018.2890595.
  • 30. Gültekin Y.C., van Houtum W.J., Koppelaar A., Willems F.M.J. Enumerative sphere shaping for wireless communications with short packets. IEEE Trans. Wirel. Commun. 2020;19:1098–1112. doi: 10.1109/TWC.2019.2951139.
  • 31. Amjad R.A. Information rates and error exponents for probabilistic amplitude shaping. arXiv 2018, arXiv:1802.05973.
  • 32. Shulman N., Feder M. Random coding techniques for nonrandom codes. IEEE Trans. Inf. Theory. 1999;45:2101–2104. doi: 10.1109/18.782147.
  • 33. Yeung R. Information Theory and Network Coding. Springer; Boston, MA, USA: 2008.
