Entropy. 2019 May 5; 21(5):469. doi: 10.3390/e21050469

Information Theoretic Security for Shannon Cipher System under Side-Channel Attacks

Bagus Santoso and Yasutada Oohama
PMCID: PMC7514958  PMID: 33267183

Abstract

In this paper, we propose a new theoretical security model for Shannon cipher systems under side-channel attacks, where the adversary is not only allowed to collect ciphertexts by eavesdropping on the public communication channel but is also allowed to collect the physical information leaked by the devices on which the cipher system is implemented, such as running time, power consumption, electromagnetic radiation, etc. Our model is very robust as it does not depend on the kind of physical information leaked by the devices. We also prove that in the case of one-time pad encryption, we can strengthen the secrecy/security of the cipher system by using an appropriate affine encoder. More precisely, we prove that for any distribution of the secret keys and any measurement device used for collecting the physical information, we can derive an achievable rate region for reliability and security such that if we compress the ciphertext using an affine encoder with a rate within the achievable rate region, then: (1) anyone with a secret key will be able to decrypt and decode the ciphertext correctly, but (2) any adversary who obtains the ciphertext and also the side physical information will not be able to obtain any information about the hidden source as long as the leaked physical information is encoded with a rate within the rate region. We derive our result by adapting the framework of the one helper source coding problem posed and investigated by Ahlswede and Körner (1975) and Wyner (1975). For reliability and security, we obtain our result by combining the result of Csiszár (1982) on universal coding for a single source using linear codes and the exponential strong converse theorem of Oohama (2015) for the one helper source coding problem.

Keywords: information theoretic security, side-channel attacks, Shannon cipher system, one helper source coding problem, strong converse theorem

1. Introduction

In most theoretical security models for encryption schemes, the adversary only obtains information from the public communication channel. In such models, an adversary is often treated as an entity that tries to obtain information about the hidden source only from the ciphertexts that are sent through the public communication channel. However, in the real world, encryption schemes are implemented on physical electronic devices, and it is widely known that any process executed in an electronic circuit generates certain kinds of correlated physical phenomena as "side" effects, according to the type of process. For example, differences in inputs to a process in an electronic circuit can induce differences in the heat, power consumption, and electromagnetic radiation generated as byproducts by the devices. Therefore, an adversary who has a certain degree of physical access to the devices may obtain some information on very sensitive hidden data, such as the keys used for the encryption, just by measuring the generated physical phenomena using appropriate measurement devices. More precisely, an adversary may deduce the value of the bits of the key by measuring the differences in the timing of the encryption process or the differences in the power consumption, electromagnetic radiation, and other physical phenomena. The information channel through which the adversary obtains data in the form of physical phenomena is called the side-channel, and attacks using the side-channel are known as side-channel attacks.

In the literature, there have been many works showing that adversaries have succeeded in breaking the security of cryptographic systems by exploiting side-channel information such as running time, power consumption, and electromagnetic radiation in the real physical world [1,2,3,4,5].

1.1. Our Contributions

1.1.1. Security Model for Side-Channel Attacks

In this paper, we propose a security model where the adversary attempts to obtain information about the hidden source by collecting data from (1) the public communication channel in the form of ciphertexts, and (2) the side-channel in the form of some physical data related to the encryption keys. Our proposed security model is illustrated in Figure 1.

Figure 1. Illustration of side-channel attacks.

Based on the security model illustrated above, we formulate a security problem of strengthening the security of Shannon cipher system where the encryption is implemented on a physical encryption device and the adversary attempts to obtain some information on the hidden source by collecting ciphertexts and performing side-channel attacks.

We describe our security model in a more formal way as follows. The source X is encrypted using an encryption device with secret key K installed. The result of the encryption, i.e., ciphertext C, is sent through a public communication channel to a data center where C is decrypted back into the source X using the same key K. The adversary A is allowed to obtain C from the public communication channel and is also equipped with an encoding device φA that encodes and processes the noisy large alphabet data Z, i.e., the measurement result of the physical information obtained from the side-channel, into the appropriate binary data MA. It should be noted that in our model, we do not put any limitation on the kind of physical information measured by the adversary. Hence, any theoretical result based on this model automatically applies to any kind of side-channel attack, including timing analysis, power analysis, and electromagnetic (EM) analysis. In addition, the measurement device may just be a simple analog-to-digital converter that converts the analog data representing physical information leaked from the device into “noisy” digital data Z. In our model, we represent the measurement process as a communication channel W.

1.1.2. Main Result

As the main theoretical result, we show that we can strengthen the secrecy/security of the Shannon cipher implemented on a physical device against an adversary who collects the ciphertexts and launches side-channel attacks by a simple method of compressing the ciphertext C from a Shannon cipher using an affine encoder φ into C˜ before releasing it into the public communication channel.

We prove that in the case of one-time pad encryption, we can strengthen the secrecy/security of the cipher system by using an appropriate affine encoder. More precisely, we prove that for any distribution of the secret key K and any measurement device (used to convert the physical information from a side-channel into the noisy large alphabet data Z), we can derive an achievable rate region for (RA,R) such that if we compress the ciphertext C into C˜ using the affine encoder φ, which has an encoding rate R inside the achievable region, then we can achieve reliability and security in the following sense:

  • anyone with the secret key K can construct an appropriate decoder that decrypts and decodes C˜ with exponentially decaying error probability, but

  • the amount of information gained by any adversary A who obtains the compressed ciphertext C˜ and encoded physical information MA is exponentially decaying to zero as long as the encoding device φA encodes the side physical information into MA with a rate RA within the achievable rate region.

By utilizing the homomorphic property of the one-time pad and affine encoding, we are able to separate the theoretical analysis of reliability and security such that we can deal with each issue independently. For reliability, we mainly obtain our result by using the result of Csiszár [6] on universal coding for a single source using linear codes. For the security analysis, we derive our result by adapting the framework of the one helper source coding problem posed and investigated by Ahlswede and Körner [7] and Wyner [8]. Specifically, in order to derive the secrecy exponent, we utilize the exponential strong converse theorem of Oohama [9] for the one helper source coding problem. In [10], Watanabe and Oohama deal with a similar source coding problem, but their result is insufficient for deriving the lower bound of the secrecy exponent. We will explain the relation between our method and previous related works in more detail in Section 4.

1.2. Comparison to Existing Models of Side-Channel Attacks

The most important feature of our model is that we do not make any assumption about the type or characteristics of the physical information that is measured by the adversary. Several theoretical models analyzing the security of a cryptographic system against side-channel attacks have been proposed in the literature. However, most of the existing works are applicable only for specific characteristics of the leaked physical information. For example, Brier et al. [1] and Coron et al. [11] propose a statistical model for side-channel attacks using the information from power consumption and the running time, whereas Agrawal et al. [5] propose a statistical model for side-channel attacks using electromagnetic (EM) radiations. A more general model for side-channel attacks is proposed by Köpf et al. [12] and Backes et al. [13], but they are heavily dependent upon implementation on certain specific devices. Micali et al. [14] propose a very general security model to capture the side-channel attacks, but they fail to offer any hint of how to build a concrete countermeasure against the side-channel attacks. The closest existing model to ours is the general framework for analyzing side-channel attacks proposed by Standaert et al. [15]. The authors of [15] propose a countermeasure against side-channel attacks that is different from ours, i.e., noise insertion on implementation. It should be noted that the noise insertion countermeasure proposed by [15] is dependent on the characteristics of the leaked physical information. On the other hand, our countermeasure, i.e., compression using an affine encoder, is independent of the characteristics of the leaked physical information.

1.3. Comparison to Encoding before Encryption

In this paper, our proposed solution is to perform additional encoding in the form of compression after the encryption process. Our aim is that by compressing the ciphertext, we compress the key “indirectly” and increase the “flatness” of the key used in the compressed ciphertext (C˜) such that the adversary will not get much additional information from eavesdropping on the compressed ciphertext (C˜). Instead of performing the encoding after encryption, one may consider performing the encoding before encryption, i.e., encoding the source and the key “directly” before performing the encryption. However, since we need to apply two separate encodings on the source and the key, we can expect that the implementation cost is more expensive than our proposed solution, i.e., approximately double the cost of applying our proposed solution. Moreover, it is not completely clear whether our security analysis still applies for this case. For example, if the adversary performs the side-channel attacks on the key after it is encoded (before encryption), we need a complete remodeling of the security problem.

1.4. Organization of this Paper

This paper is structured as follows. In Section 2, we show the basic notations and definitions that we use throughout this paper, and we also describe the formal formulations of our model and the security problem. In Section 3, we explain the idea and the formulation of our proposed solution. In Section 4, we explain the relation between our formulation and previous related works. Based on this, we explain the theoretical challenge which we have to overcome to prove that our proposed solution is sound. In Section 5, we state our main theorem on the reliability and security of our solution. In Section 6, we show the proof of our main theorem. We put the proofs of other related propositions, lemmas, and theorems in the appendix.

2. Problem Formulation

In this section, we will introduce the general notations used throughout this paper and provide a description of the basic problem we are focusing on, i.e., side-channel attacks on Shannon cipher systems. We also explain the basic framework of the solution that we consider to solve the problem. Finally, we state the formulation of the reliability and security problem that we consider and aim to solve in this paper.

2.1. Preliminaries

In this subsection, we show the basic notations and conventions used in this paper.

Random Source of Information and Key: Let $X$ be a random variable taking values in a finite set $\mathcal{X}$. Let $\{X_t\}_{t=1}^{\infty}$ be a stationary discrete memoryless source (DMS) such that for each $t=1,2,\ldots$, $X_t$ takes values in the finite set $\mathcal{X}$ and obeys the same distribution as that of $X$, denoted by $p_X=\{p_X(x)\}_{x\in\mathcal{X}}$. The stationary DMS $\{X_t\}_{t=1}^{\infty}$ is specified with $p_X$. In addition, let $K$ be a random variable taking values in the same finite set $\mathcal{X}$ and representing the key used for encryption. Similarly, let $\{K_t\}_{t=1}^{\infty}$ be a stationary discrete memoryless source such that for each $t=1,2,\ldots$, $K_t$ takes values in the finite set $\mathcal{X}$ and obeys the same distribution as that of $K$, denoted by $p_K=\{p_K(k)\}_{k\in\mathcal{X}}$. The stationary DMS $\{K_t\}_{t=1}^{\infty}$ is specified with $p_K$. In this paper, we assume that $p_K$ is the uniform distribution over $\mathcal{X}$.

Random Variables and Sequences: We write the sequence of random variables with length $n$ from the information source as $X^n:=X_1X_2\cdots X_n$. Similarly, strings with length $n$ of $\mathcal{X}^n$ are written as $x^n:=x_1x_2\cdots x_n\in\mathcal{X}^n$. For $x^n\in\mathcal{X}^n$, $p_{X^n}(x^n)$ stands for the probability of the occurrence of $x^n$. When the information source is memoryless and specified with $p_X$, the following equation holds:

$$p_{X^n}(x^n)=\prod_{t=1}^{n}p_X(x_t).$$

In this case, we write $p_{X^n}(x^n)$ as $p_X^n(x^n)$. Similar notations are used for other random variables and sequences.

Conventions and Notations: Without loss of generality, throughout this paper, we assume that $\mathcal{X}$ is a finite field. The notation $\oplus$ is used to denote the field addition operation, while the notation $\ominus$ is used to denote the field subtraction operation, i.e., $a\ominus b=a\oplus(-b)$ for any elements $a,b\in\mathcal{X}$. Throughout this paper, all logarithms are taken to the natural base.

2.2. Basic System Description

In this subsection, we explain the basic system setting and the basic adversarial model we consider in this paper. First, let the information source and the key be generated independently by different parties Sgen and Kgen, respectively. In our setting, we assume the following:

  • The random key Kn is generated by Kgen from a uniform distribution.

  • The source is generated by Sgen and is independent of the key.

Next, let the random source Xn from Sgen be sent to the node L, and let the random key Kn from Kgen also be sent to L. Further settings of our system are described as follows and are also shown in Figure 2.

  1. Source Processing: At the node L, $X^n$ is encrypted with the key $K^n$ using the encryption function Enc. The ciphertext $C^n$ of $X^n$ is given by
    $$C^n:=\mathrm{Enc}(X^n)=X^n\oplus K^n.$$
  2. Transmission: Next, the ciphertext $C^n$ is sent to the information processing center D through a public communication channel. Meanwhile, the key $K^n$ is sent to D through a private communication channel.

  3. Sink Node Processing: In D, we decrypt the ciphertext $C^n$ using the key $K^n$ through the corresponding decryption procedure Dec defined by $\mathrm{Dec}(C^n)=C^n\ominus K^n$. It is obvious that we can correctly reproduce the source output $X^n$ from $C^n$ and $K^n$ with the decryption function Dec.
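To make the one-time pad steps above concrete, the following minimal Python sketch (not part of the original paper) runs Enc and Dec over a small prime field; the prime q, the block length n, and the use of NumPy are illustrative assumptions only.

```python
import numpy as np

q = 5          # illustrative prime, so that Z_q is a field (assumption for this sketch)
n = 8          # block length
rng = np.random.default_rng(0)

x = rng.integers(0, q, size=n)   # source block X^n (any distribution p_X)
k = rng.integers(0, q, size=n)   # key block K^n, uniform over the field

def enc(x, k):
    """One-time pad: C^n = X^n (+) K^n (component-wise field addition)."""
    return (x + k) % q

def dec(c, k):
    """Decryption: Dec(C^n) = C^n (-) K^n recovers X^n exactly."""
    return (c - k) % q

c = enc(x, k)
assert np.array_equal(dec(c, k), x)   # perfect reconstruction with the shared key
```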

Figure 2. Main problem: side-channel attacks on a Shannon cipher system.

Side-Channel Attacks by Eavesdropper Adversary: An (eavesdropper) adversary A eavesdrops on the public communication channel in the system. The adversary A also uses side information obtained by side-channel attacks. In this paper, we introduce a new theoretical model of side-channel attacks that is described as follows. Let $\mathcal{Z}$ be a finite set and let $W:\mathcal{X}\to\mathcal{Z}$ be a noisy channel. Let $Z$ be a channel output from $W$ for the random input variable $K$. We consider the discrete memoryless channel specified with $W$. Let $Z^n\in\mathcal{Z}^n$ be a random variable obtained as the channel output by connecting $K^n\in\mathcal{X}^n$ to the input of the channel. We write the conditional distribution on $Z^n$ given $K^n$ as

$$W^n=\{W^n(z^n|k^n)\}_{(k^n,z^n)\in\mathcal{X}^n\times\mathcal{Z}^n}.$$

Since the channel is memoryless, we have

$$W^n(z^n|k^n)=\prod_{t=1}^{n}W(z_t|k_t). \quad (1)$$

On the above output Zn of Wn for the input Kn, we assume the following:

  • The three random variables $X$, $K$, and $Z$ satisfy $X\perp(K,Z)$, which implies that $X^n\perp(K^n,Z^n)$.

  • W is given in the system and the adversary A cannot control W.

  • Through side-channel attacks, the adversary A can access Zn.
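The memoryless measurement channel $W$ of (1) can be simulated directly. The sketch below is an illustration under assumed alphabet sizes and a randomly chosen channel matrix; none of these parameters come from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
q = 5                      # key alphabet size |X| (illustrative)
zq = 4                     # measurement alphabet size |Z| (illustrative)
n = 10

# Row-stochastic channel matrix W(z|k): each row is a distribution over Z.
W = rng.random((q, zq))
W /= W.sum(axis=1, keepdims=True)

k = rng.integers(0, q, size=n)                     # key block K^n
# Memoryless measurement: Z_t ~ W(.|K_t) independently for each t, as in Equation (1).
z = np.array([rng.choice(zq, p=W[kt]) for kt in k])
print(k, z)
```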

We next formulate the side information the adversary A obtains by side-channel attacks. For each $n=1,2,\ldots$, let $\varphi_A^{(n)}:\mathcal{Z}^n\to\mathcal{M}_A^{(n)}$ be an encoder function. Set $\varphi_A:=\{\varphi_A^{(n)}\}_{n=1,2,\ldots}$. Let

$$R_A^{(n)}:=\frac{1}{n}\log\|\varphi_A^{(n)}\|=\frac{1}{n}\log|\mathcal{M}_A^{(n)}|$$

be the rate of the encoder function $\varphi_A^{(n)}$. For $R_A>0$, we set

$$\mathcal{F}_A^{(n)}(R_A):=\{\varphi_A^{(n)}:R_A^{(n)}\le R_A\}.$$

For the encoded side information the adversary A obtains, we assume the following.

  • The adversary A, having accessed Zn, obtains the encoded additional information φA(n)(Zn). For each n=1,2,, the adversary A can design φA(n).

  • The sequence {RA(n)}n=1 must be upper-bounded by a prescribed value. In other words, the adversary A must use φA(n) such that for some RA and for any sufficiently large n, φA(n)FA(n)(RA).

On the Scope of Our Theoretical Model: When the |Z| is not so large, the adversary A may directly access Zn. In contrast, in a real situation of side-channel attacks, often the noisy version Zn of Kn can be regarded as very close to an analog random signal. In this case, |Z| is sufficiently large and the adversary A cannot obtain Zn in a lossless form. Our theoretical model can address such situations of side-channel attacks.

2.3. Solution Framework

As the basic solution framework, we consider applying a post-encryption-compression coding system. The application of this system is illustrated in Figure 3.

  1. Encoding at Source Node L: We first use $\varphi^{(n)}$ to encode the ciphertext $C^n=X^n\oplus K^n$. The formal definition of $\varphi^{(n)}$ is $\varphi^{(n)}:\mathcal{X}^n\to\mathcal{X}^m$. Let $\widetilde{C}^m=\varphi^{(n)}(C^n)$. Instead of sending $C^n$, we send $\widetilde{C}^m$ to the public communication channel.

  2. Decoding at Sink Node D: D receives $\widetilde{C}^m$ from the public communication channel. Using the common key $K^n$ and the decoder function $\Psi^{(n)}:\mathcal{X}^m\times\mathcal{X}^n\to\mathcal{X}^n$, D outputs an estimation $\widehat{X}^n=\Psi^{(n)}(\widetilde{C}^m,K^n)$ of $X^n$.

Figure 3. Basic solution framework: post-encryption-compression coding system.

On Reliability and Security: From the description of our system in the previous section, the decoding process in our system above is successful if $\widehat{X}^n=X^n$ holds. Hence, the decoding error probability $p_e$ is given by

$$p_e=p_e(\varphi^{(n)},\Psi^{(n)}|p_X^n):=\Pr[\Psi^{(n)}(\varphi^{(n)}(X^n\oplus K^n),K^n)\neq X^n].$$

Set $M_A^{(n)}=\varphi_A^{(n)}(Z^n)$. The information leakage $\Delta^{(n)}$ on $X^n$ from $(\widetilde{C}^m,M_A^{(n)})$ is measured by the mutual information between $X^n$ and $(\widetilde{C}^m,M_A^{(n)})$. This quantity is formally defined by

$$\Delta^{(n)}=\Delta^{(n)}(\varphi^{(n)},\varphi_A^{(n)}|p_X^n,p_K^n,W^n):=I(X^n;\widetilde{C}^m,M_A^{(n)}).$$

Reliable and Secure Framework:

Definition 1.

A quantity $R$ is achievable under $R_A>0$ for the system Sys if there exists a sequence $\{(\varphi^{(n)},\Psi^{(n)})\}_{n\ge1}$ such that $\forall\epsilon>0$, $\exists n_0=n_0(\epsilon)\in\mathbb{N}_0$, $\forall n\ge n_0$, we have

$$\frac{1}{n}\log|\mathcal{X}^m|=\frac{m}{n}\log|\mathcal{X}|\le R,\quad p_e(\varphi^{(n)},\Psi^{(n)}|p_X^n)\le\epsilon,$$

and for any eavesdropper A with $\varphi_A$ satisfying $\varphi_A^{(n)}\in\mathcal{F}_A^{(n)}(R_A)$,

$$\Delta^{(n)}(\varphi^{(n)},\varphi_A^{(n)}|p_X^n,p_K^n,W^n)\le\epsilon.$$

Definition 2.

[Reliable and Secure Rate Region] Let $\mathcal{R}_{\mathrm{Sys}}(p_X,p_K,W)$ denote the set of all $(R_A,R)$ such that $R$ is achievable under $R_A$. We call $\mathcal{R}_{\mathrm{Sys}}(p_X,p_K,W)$ the reliable and secure rate region.

Definition 3.

A triple $(R,E,F)$ is achievable under $R_A>0$ for the system Sys if there exists a sequence $\{(\varphi^{(n)},\psi^{(n)})\}_{n\ge1}$ such that $\forall\epsilon>0$, $\exists n_0=n_0(\epsilon)\in\mathbb{N}_0$, $\forall n\ge n_0$, we have

$$\frac{1}{n}\log|\mathcal{X}^m|=\frac{m}{n}\log|\mathcal{X}|\le R,\quad p_e(\varphi^{(n)},\psi^{(n)}|p_X^n)\le e^{-n(E-\epsilon)},$$

and for any eavesdropper A with $\varphi_A$ satisfying $\varphi_A^{(n)}\in\mathcal{F}_A^{(n)}(R_A)$, we have

$$\Delta^{(n)}(\varphi^{(n)},\varphi_A^{(n)}|p_X^n,p_K^n,W^n)\le e^{-n(F-\epsilon)}.$$

Definition 4 (Rate, Reliability, and Security Region).

Let $\mathcal{D}_{\mathrm{Sys}}(p_X,p_K,W)$ denote the set of all $(R_A,R,E,F)$ such that $(R,E,F)$ is achievable under $R_A$. We call $\mathcal{D}_{\mathrm{Sys}}(p_X,p_K,W)$ the rate, reliability, and security region.

Our aim in this paper is to find the explicit inner bounds of RSys(pX, pK,W) and DSys(pX, pK,W).

3. Proposed Idea: Affine Encoder as a Privacy Amplifier

In order to instantiate the basic solution framework mentioned in previous section, we propose the use of an affine encoder as the compression function φ(n). We show in this section that we can easily construct an affine encoder that is suitable for our solution framework based on a linear encoder. The instantiation of the solution framework with an affine encoder is illustrated in Figure 4.

Figure 4. Our proposed solution: affine encoders as privacy amplifiers.

Construction of the Affine Encoder: For each $n=1,2,\ldots$, let $\phi^{(n)}:\mathcal{X}^n\to\mathcal{X}^m$ be a linear mapping. We define the mapping $\phi^{(n)}$ by

$$\phi^{(n)}(x^n)=x^nA\ \text{ for }x^n\in\mathcal{X}^n, \quad (2)$$

where $A$ is a matrix with $n$ rows and $m$ columns. Entries of $A$ are from $\mathcal{X}$. We fix $b^m\in\mathcal{X}^m$. Define the mapping $\varphi^{(n)}:\mathcal{X}^n\to\mathcal{X}^m$ by

$$\varphi^{(n)}(k^n):=\phi^{(n)}(k^n)\oplus b^m=k^nA\oplus b^m\ \text{ for }k^n\in\mathcal{X}^n. \quad (3)$$

The mapping $\varphi^{(n)}$ is called the affine mapping induced by the linear mapping $\phi^{(n)}$ and the constant vector $b^m\in\mathcal{X}^m$. By the definition of $\varphi^{(n)}$ shown in (3), the following affine structure holds:

$$\varphi^{(n)}(x^n\oplus k^n)=(x^n\oplus k^n)A\oplus b^m=x^nA\oplus(k^nA\oplus b^m)=\phi^{(n)}(x^n)\oplus\varphi^{(n)}(k^n)\ \text{ for }x^n,k^n\in\mathcal{X}^n. \quad (4)$$

Next, let $\psi^{(n)}$ be the corresponding decoder for $\phi^{(n)}$ such that $\psi^{(n)}:\mathcal{X}^m\to\mathcal{X}^n$. Note that $\psi^{(n)}$ does not have a linear structure in general.
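The construction (2)–(4) is easy to check numerically. The following sketch (an illustration, not the paper's implementation) builds a random affine encoder over a small prime field, chosen here only so that modular arithmetic gives a field, and verifies the homomorphic identity (4).

```python
import numpy as np

rng = np.random.default_rng(2)
q, n, m = 5, 8, 3          # illustrative sizes; the rate is (m/n) log q

A = rng.integers(0, q, size=(n, m))   # random n x m matrix over the field
b = rng.integers(0, q, size=m)        # random constant vector b^m

def phi(x):
    """Linear encoder (2): phi(x^n) = x^n A."""
    return (x @ A) % q

def varphi(k):
    """Affine encoder (3): varphi(k^n) = k^n A (+) b^m."""
    return (phi(k) + b) % q

x = rng.integers(0, q, size=n)
k = rng.integers(0, q, size=n)
# Affine structure (4): varphi(x (+) k) = phi(x) (+) varphi(k).
lhs = varphi((x + k) % q)
rhs = (phi(x) + varphi(k)) % q
assert np.array_equal(lhs, rhs)
```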

Description of Proposed Procedure: We describe the procedure of our privacy amplified system as follows.

  1. Encoding of Ciphertext: First, we use $\varphi^{(n)}$ to encode the ciphertext $C^n=X^n\oplus K^n$. Let $\widetilde{C}^m=\varphi^{(n)}(C^n)$. Then, instead of sending $C^n$, we send $\widetilde{C}^m$ to the public communication channel. By the affine structure of the encoder $\varphi^{(n)}$ (shown in (4)) we have
    $$\widetilde{C}^m=\varphi^{(n)}(X^n\oplus K^n)=\phi^{(n)}(X^n)\oplus\varphi^{(n)}(K^n)=\widetilde{X}^m\oplus\widetilde{K}^m, \quad (5)$$
    where we set $\widetilde{X}^m:=\phi^{(n)}(X^n)$, $\widetilde{K}^m:=\varphi^{(n)}(K^n)$.
  2. Decoding at Sink Node D: First, using the affine encoder $\varphi^{(n)}$, D encodes the key $K^n$ received through the private channel into $\widetilde{K}^m=\varphi^{(n)}(K^n)$. Receiving $\widetilde{C}^m$ from the public communication channel, D computes $\widetilde{X}^m$ in the following way. From (5), the decoder D can obtain $\widetilde{X}^m=\phi^{(n)}(X^n)$ by subtracting $\widetilde{K}^m=\varphi^{(n)}(K^n)$ from $\widetilde{C}^m$. Finally, D outputs $\widehat{X}^n$ by applying the decoder $\psi^{(n)}$ to $\widetilde{X}^m$ as follows:
    $$\widehat{X}^n=\psi^{(n)}(\widetilde{X}^m)=\psi^{(n)}(\phi^{(n)}(X^n)). \quad (6)$$

Our concrete privacy-amplified system described above is illustrated in Figure 4.

Splitting of Reliability and Security

By the affine structure of the encoder function φ(n), the proposed privacy amplified system can be split into two coding problems. One is a source coding problem using a linear encoder ϕ(n). We hereafter call this Problem 0. The other is a privacy amplification problem using the affine encoder φ(n). We call this Problem 1. These two problems are shown in Figure 5.

Figure 5. Two split problems: Problem 0 (Reliability) and Problem 1 (Security).

On Reliability (Problem 0): From the description of our system in the previous section, the decoding process in our system above is successful if $\widehat{X}^n=X^n$ holds. Combining this and (6), it is clear that the decoding error probability $p_e$ is as follows:

$$p_e=p_e(\phi^{(n)},\psi^{(n)}|p_X^n)=\Pr[\psi^{(n)}(\phi^{(n)}(X^n))\neq X^n].$$

In Problem 0, we discuss the minimum rate $R$ such that $\exists\{(\phi^{(n)},\psi^{(n)})\}_{n\ge1}$ such that $\forall\epsilon>0$, $\exists n_0=n_0(\epsilon)\in\mathbb{N}_0$, $\forall n\ge n_0$, we have

$$\frac{1}{n}\log|\mathcal{X}^m|=\frac{m}{n}\log|\mathcal{X}|\le R+\epsilon,\quad p_e(\phi^{(n)},\psi^{(n)}|p_X^n)\le\epsilon.$$

It is well known that this minimum is equal to $H(X)$ when $\{\phi^{(n)}\}_{n\ge1}$ is a sequence of general (nonlinear) encoders. Csiszár [6] proved the existence of a sequence of linear encoders and nonlinear decoders $\{(\phi^{(n)},\psi^{(n)})\}_{n\ge1}$ such that for any $p_X$ satisfying $R>H(X)$, the error probability $p_e(\phi^{(n)},\psi^{(n)}|p_X^n)$ decays exponentially as $n\to\infty$. His result is stated in the next section.

On Security (Problem 1): We assume that the adversary A knows $(A,b^m)$ defining the affine encoder $\varphi^{(n)}$. When $\varphi^{(n)}$ has the affine structure shown in (4), the information leakage $\Delta^{(n)}$ measured by the mutual information between $X^n$ and $(\widetilde{C}^m,M_A^{(n)})$ has the following form:

$$\Delta^{(n)}=\Delta^{(n)}(\varphi^{(n)},\varphi_A^{(n)}|p_X^n,p_K^n,W^n)=I(X^n;\widetilde{C}^m,M_A^{(n)})=I(X^n;\varphi^{(n)}(X^n\oplus K^n),M_A^{(n)})=I(X^n;\phi^{(n)}(X^n)\oplus\varphi^{(n)}(K^n),M_A^{(n)})\stackrel{\mathrm{(a)}}{=}I(X^n;\widetilde{X}^m\oplus\widetilde{K}^m|M_A^{(n)}). \quad (7)$$

Step (a) follows from $X^n\perp M_A^{(n)}$. Using (7), we upper bound $\Delta^{(n)}=I(X^n;\widetilde{C}^m,M_A^{(n)})$ to obtain the following lemma.

Lemma 1.

$$\Delta^{(n)}=I(X^n;\widetilde{C}^m,M_A^{(n)})\le D(p_{\widetilde{K}^m|M_A^{(n)}}\|p_{V^m}|p_{M_A^{(n)}}), \quad (8)$$

where $p_{V^m}$ represents the uniform distribution over $\mathcal{X}^m$.

Proof. 

We have the following chain of inequalities:

$$\Delta^{(n)}=I(X^n;\widetilde{C}^m,M_A^{(n)})\stackrel{\mathrm{(a)}}{=}I(X^n;\widetilde{X}^m\oplus\widetilde{K}^m|M_A^{(n)})\le\log|\mathcal{X}^m|-H(\widetilde{X}^m\oplus\widetilde{K}^m|X^n,M_A^{(n)})\stackrel{\mathrm{(b)}}{=}\log|\mathcal{X}^m|-H(\widetilde{K}^m|X^n,M_A^{(n)})\stackrel{\mathrm{(c)}}{=}\log|\mathcal{X}^m|-H(\widetilde{K}^m|M_A^{(n)})=D(p_{\widetilde{K}^m|M_A^{(n)}}\|p_{V^m}|p_{M_A^{(n)}}).$$

Step (a) follows from (7). Step (b) follows from $\widetilde{X}^m=\phi^{(n)}(X^n)$. Step (c) follows from $(\widetilde{K}^m,M_A^{(n)})\perp X^n$. □

We set

$$\xi_D^{(n)}=\xi_D^{(n)}(\varphi^{(n)},R_A|p_K^n,W^n):=\max_{\varphi_A^{(n)}\in\mathcal{F}_A^{(n)}(R_A)}D(p_{\widetilde{K}^m|M_A^{(n)}}\|p_{V^m}|p_{M_A^{(n)}}).$$

Then we have the following lemma.

Lemma 2.

For any affine encoder $\varphi^{(n)}:\mathcal{X}^n\to\mathcal{X}^m$, we have

$$\Delta^{(n)}(\varphi^{(n)},\varphi_A^{(n)}|p_X^n,p_K^n,W^n)\le\xi_D^{(n)}(\varphi^{(n)},R_A|p_K^n,W^n).$$

The quantity $\xi_D^{(n)}(\varphi^{(n)},R_A|p_K^n,W^n)$ will play an important role in deriving an explicit upper bound of $\Delta^{(n)}(\varphi^{(n)},\varphi_A^{(n)}|p_X^n,p_K^n,W^n)$. In Problem 1, we consider the privacy amplification problem using the quantity $\xi_D^{(n)}(\varphi^{(n)},R_A|p_K^n,W^n)$ as a security criterion. In this problem, we study an explicit characterization of the region denoted by $\mathcal{R}_{\mathrm{P1}}(p_K,W)$, which consists of all pairs $(R_A,R)$ such that $\exists\{\varphi^{(n)}\}_{n\ge1}$ such that $\forall\varepsilon>0$, $\exists n_0=n_0(\varepsilon)\in\mathbb{N}_0$, $\forall n\ge n_0$,

$$\frac{1}{n}\log\|\varphi^{(n)}\|=\frac{m}{n}\log|\mathcal{X}|\ge R-\varepsilon\quad\text{and}\quad\xi_D^{(n)}(\varphi^{(n)},R_A|p_K^n,W^n)\le\varepsilon.$$

In the next section, we discuss two previous works related to Problem 1.

4. Previous Related Works

In this section, we introduce approaches from previous existing work related to Problem 0 (reliability) and Problem 1 (security). Our goal is that by showing these previous approaches, it will be easier to understand our approach to analyzing reliability and security. In particular, for Problem 1 (security), we explain approaches used in similar problems in previous works and highlight their differences from Problem 1.

We first state a previous result related to Problem 0. Let $\varphi^{(n)}$ be an affine encoder and $\phi^{(n)}$ the linear encoder from which $\varphi^{(n)}$ is induced. We define a function related to an exponential upper bound of $p_e(\phi^{(n)},\psi^{(n)}|p_X^n)$. Let $\bar{X}$ be an arbitrary random variable over $\mathcal{X}$ that has a probability distribution $p_{\bar{X}}$. Let $\mathcal{P}(\mathcal{X})$ denote the set of all probability distributions on $\mathcal{X}$. For $R\ge0$ and $p_X\in\mathcal{P}(\mathcal{X})$, we define the following function:

$$E(R|p_X):=\min_{p_{\bar{X}}\in\mathcal{P}(\mathcal{X})}\left\{[R-H(\bar{X})]_{+}+D(p_{\bar{X}}\|p_X)\right\}.$$
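For a binary source, $E(R|p_X)$ can be approximated by a one-dimensional grid search over $p_{\bar{X}}$. The sketch below is illustrative only; the bias 0.11 and the grid resolution are arbitrary choices, and entropies are in nats to match the natural logarithms used in the paper.

```python
import numpy as np

def h_nats(p):
    """Binary entropy in nats."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def kl_nats(p, r):
    """Binary divergence D(p || r) in nats."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    r = np.clip(r, 1e-12, 1 - 1e-12)
    return p * np.log(p / r) + (1 - p) * np.log((1 - p) / (1 - r))

def E(R, pX, grid=10001):
    """Grid-search approximation of E(R|p_X) = min_{pbar} [R - H(Xbar)]_+ + D(pbar || p_X)."""
    pbar = np.linspace(0.0, 1.0, grid)
    return np.min(np.maximum(R - h_nats(pbar), 0.0) + kl_nats(pbar, pX))

pX = 0.11                          # illustrative source bias
print(h_nats(pX))                  # H(X) is about 0.35 nats
print(E(0.30, pX))                 # R < H(X): the exponent is (numerically) zero
print(E(0.60, pX))                 # R > H(X): strictly positive exponent
```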

By simple computation, we can prove that E(R|pX) takes positive values if and only if R>H(X). We have the following result.

Theorem 1. 

(Csiszár [6]). There exists a sequence $\{(\phi^{(n)},\psi^{(n)})\}_{n\ge1}$ such that for any $p_X$, we have

$$\frac{1}{n}\log|\mathcal{X}^m|=\frac{m}{n}\log|\mathcal{X}|\le R,\quad p_e(\phi^{(n)},\psi^{(n)}|p_X^n)\le e^{-n[E(R|p_X)-\delta_n]}, \quad (9)$$

where $\delta_n$ is defined by

$$\delta_n:=\frac{1}{n}\log\left[e(n+1)^{3|\mathcal{X}|}\right].$$

Note that $\delta_n\to0$ as $n\to\infty$.

It follows from Theorem 1 that if $R>H(X)$, then the error probability of decoding $p_e(\phi^{(n)},\psi^{(n)}|p_X^n)$ decays exponentially, and its exponent is lower bounded by the quantity $E(R|p_X)$. Furthermore, the code $\{(\phi^{(n)},\psi^{(n)})\}_{n\ge1}$ is a universal code that depends only on the rate $R$ and not on the value of $p_X\in\mathcal{P}(\mathcal{X})$.

We next state two coding problems related to Problem 1. One is a problem on privacy amplification for the bounded storage eavesdropper posed and investigated by Watanabe and Oohama [10]. The other is the one helper source coding problem posed and investigated by Ahlswede and Körner [7] and Wyner [16]. We hereafter call the former and latter problems, respectively, Problem 2 and Problem 3. Problems 1–3 are shown in Figure 6. As we can see from this figure, these three problems are based on the same communication scheme. The classes of encoder functions and the security criteria on A are different between these three problems. In Problem 1, the sequence of encoding functions {φ(n)}n≥1 is restricted to the class of affine encoders to satisfy the homomorphic property. On the other hand, in Problems 2 and 3, we have no such restriction on the class of encoder functions. In the descriptions of Problems 2 and 3, we state the difference in security criteria between Problems 1, 2, and 3. A comparison of the three problems in terms of {φ(n)}n≥1 and security criteria is summarized in Table 1.

Figure 6. Three related coding problems.

Table 1.

Differences between Problems 1, 2, and 3 in terms of $\{\varphi^{(n)}\}_{n\ge1}$ and security criteria.

  • Problem 1: $\varphi^{(n)}$ is restricted to affine encoders; the security criterion is $D(p_{\widetilde{K}^m|M_A^{(n)}}\|p_{V^m}|p_{M_A^{(n)}})$.
  • Problem 2: $\varphi^{(n)}$ is a general encoder; the security criterion is $d(p_{V^m}\times p_{M_A^{(n)}},p_{\widetilde{K}^mM_A^{(n)}})$.
  • Problem 3: $\varphi^{(n)}$ is a general encoder; the criterion is the correct decoding probability $p_{c,A}^{(n)}(\varphi^{(n)},\varphi_A^{(n)},\psi_A^{(n)}|p_K^n,W^n)$.

In Problem 2, Alice and Bob share a random variable $K^n$ of block length $n$, and an eavesdropper adversary A has a random variable $Z^n$ that is correlated to $K^n$. In such a situation, Alice and Bob try to distill a secret key that is as long as possible. In [10], the authors considered a situation in which the adversary's random variable $Z^n$ is stored as a function value of $Z^n$, and the rate of the storage size is bounded. This situation makes sense when the alphabet size of the adversary's observation $Z^n$ is too large for $Z^n$ to be stored directly. In such a situation, Watanabe and Oohama [10] obtained an explicit characterization of the region $\mathcal{R}_{\mathrm{WO}}(p_K,W)$ indicating the trade-off between the key rate $R=(m/n)\log|\mathcal{X}|$ and the rate $R_A=(1/n)\log|\mathcal{M}_A^{(n)}|$ of the storage size. In Problem 2, the variational distance $d(p_{V^m}\times p_{M_A^{(n)}},p_{\widetilde{K}^mM_A^{(n)}})$ between $p_{V^m}\times p_{M_A^{(n)}}$ and $p_{\widetilde{K}^mM_A^{(n)}}$ is used as a security criterion instead of $D(p_{\widetilde{K}^m|M_A^{(n)}}\|p_{V^m}|p_{M_A^{(n)}})$ in Problem 1. Define

$$\xi_d^{(n)}=\xi_d^{(n)}(\varphi^{(n)},R_A|p_K^n,W^n):=\max_{\varphi_A^{(n)}\in\mathcal{F}_A^{(n)}(R_A)}d(p_{V^m}\times p_{M_A^{(n)}},p_{\widetilde{K}^mM_A^{(n)}}).$$

Then the formal definition of the region $\mathcal{R}_{\mathrm{WO}}(p_K,W)$ is given by the following:

$$\mathcal{R}_{\mathrm{WO}}(p_K,W):=\{(R_A,R):\exists\{\varphi^{(n)}\}_{n\ge1}\text{ such that }\forall\varepsilon>0,\ \exists n_0=n_0(\varepsilon)\in\mathbb{N}_0,\ \forall n\ge n_0,\ (m/n)\log|\mathcal{X}|\ge R-\varepsilon\text{ and }\xi_d^{(n)}(\varphi^{(n)},R_A|p_K^n,W^n)\le\varepsilon\}.$$

In Problem 3, the adversary outputs an estimation $\widehat{K}^n$ of $K^n$ from $\widetilde{K}^m=\varphi^{(n)}(K^n)$ and $M_A^{(n)}=\varphi_A^{(n)}(Z^n)$. Let $\psi_A^{(n)}:\mathcal{M}_A^{(n)}\times\mathcal{X}^m\to\mathcal{X}^n$ be a decoder function of the adversary. Then $\widehat{K}^n$ is given by $\widehat{K}^n=\psi_A^{(n)}(\varphi_A^{(n)}(Z^n),\varphi^{(n)}(K^n))$. Let

$$p_{e,A}^{(n)}=p_{e,A}^{(n)}(\varphi^{(n)},\varphi_A^{(n)},\psi_A^{(n)}|p_K^n,W^n):=\Pr\left[K^n\neq\psi_A^{(n)}(\varphi_A^{(n)}(Z^n),\varphi^{(n)}(K^n))\right]$$

be the error probability of decoding for Problem 3. The quantity $M_A^{(n)}$ serves as a helper for the decoding of $K^n$ from $\widetilde{K}^m$. In Problem 3, Ahlswede and Körner [7] and Wyner [16] investigated an explicit characterization of the rate region $\mathcal{R}_{\mathrm{AKW}}(p_K,W)$ indicating the trade-off between $R_A$ and $R$ under the condition that $p_{e,A}^{(n)}=\Pr\{K^n\neq\widehat{K}^n\}$ vanishes asymptotically. The region $\mathcal{R}_{\mathrm{AKW}}(p_K,W)$ is formally defined by

$$\mathcal{R}_{\mathrm{AKW}}(p_K,W):=\{(R_A,R):\exists\{(\varphi^{(n)},\varphi_A^{(n)},\psi_A^{(n)})\}_{n\ge1}\text{ such that }\forall\varepsilon>0,\ \exists n_0=n_0(\varepsilon)\in\mathbb{N}_0,\ \forall n\ge n_0,\ (m/n)\log|\mathcal{X}|\le R+\varepsilon,\ \varphi_A^{(n)}\in\mathcal{F}_A^{(n)}(R_A+\varepsilon),\text{ and }p_{e,A}^{(n)}(\varphi^{(n)},\varphi_A^{(n)},\psi_A^{(n)}|p_K^n,W^n)\le\varepsilon\}.$$

The region $\mathcal{R}_{\mathrm{AKW}}(p_K,W)$ was determined by Ahlswede and Körner [7] and Wyner [16]. To state their result, we define several quantities. Let $U$ be an auxiliary random variable taking values in a finite set $\mathcal{U}$. We assume that the joint distribution of $(U,Z,K)$ is

$$p_{UZK}(u,z,k)=p_U(u)p_{Z|U}(z|u)p_{K|Z}(k|z).$$

The above condition is equivalent to $U\to Z\to K$. Define the set of probability distributions $p=p_{UZK}$ by

$$\mathcal{P}(p_K,W):=\{p_{UZK}:|\mathcal{U}|\le|\mathcal{Z}|+1,\ U\to Z\to K\}.$$

Set

$$\mathcal{R}(p):=\{(R_A,R):R_A,R\ge0,\ R_A\ge I(Z;U),\ R\ge H(K|U)\},\qquad\mathcal{R}(p_K,W):=\bigcup_{p\in\mathcal{P}(p_K,W)}\mathcal{R}(p).$$
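As a numerical illustration of $\mathcal{R}(p_K,W)$, the sketch below takes a uniform binary key and a binary symmetric side channel $W$ (both illustrative assumptions, not choices made in the paper), sweeps a binary symmetric test channel $p_{U|Z}$, and prints the resulting corner points $(I(Z;U),H(K|U))$; every rate pair dominating such a point lies in $\mathcal{R}(p_K,W)$.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

eps = 0.10                              # illustrative BSC side channel W(z|k)
pK = np.array([0.5, 0.5])               # uniform key, as assumed in the paper
W = np.array([[1 - eps, eps], [eps, 1 - eps]])   # p_{Z|K}
pZ = pK @ W                                      # marginal of Z
pKZ = pK[:, None] * W                            # joint p(k, z)
pK_given_Z = (pKZ / pZ[None, :]).T               # p(k|z), rows indexed by z

for beta in np.linspace(0.0, 0.5, 6):
    # Test channel p(u|z): a BSC(beta), so that U - Z - K forms a Markov chain.
    pU_given_Z = np.array([[1 - beta, beta], [beta, 1 - beta]])
    pZU = pZ[:, None] * pU_given_Z               # joint p(z, u)
    pU = pZU.sum(axis=0)
    IZU = entropy(pZ) + entropy(pU) - entropy(pZU.ravel())      # I(Z;U)
    pKU = np.einsum('zk,zu->uk', pK_given_Z, pZU)               # joint p(u, k)
    HK_given_U = entropy(pKU.ravel()) - entropy(pU)             # H(K|U)
    print(f"beta={beta:.1f}: (I(Z;U), H(K|U)) = ({IZU:.3f}, {HK_given_U:.3f}) nats")
```

At beta = 0.5 the auxiliary variable is useless and the printed point is $(0,H(K))$, the corner point named in Property 1 part (b) below.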

We can show that the region R(pK,W) satisfies the following property.

Property 1.

  • (a) 

    The region $\mathcal{R}(p_K,W)$ is a closed convex subset of $\mathbb{R}_{+}^{2}:=\{(R_A,R):R_A\ge0,\,R\ge0\}$.

  • (b) 
    For any $(p_K,W)$, we have
    $$\min_{(R_A,R)\in\mathcal{R}(p_K,W)}(R_A+R)=H(K). \quad (10)$$
    The minimum is attained by $(R_A,R)=(0,H(K))$. This result implies that
    $$\mathcal{R}(p_K,W)\subseteq\{(R_A,R):R_A+R\ge H(K)\}\cap\mathbb{R}_{+}^{2}.$$

    Furthermore, the point $(0,H(K))$ always belongs to $\mathcal{R}(p_K,W)$.

Property 1 part (a) is a well-known property. The proof of Property 1 part (b) is easy. Proofs of Property 1 parts (a) and (b) are omitted. A typical shape of the region $\mathcal{R}(p_K,W)$ is shown in Figure 7.

Figure 7. Shape of the region $\mathcal{R}(p_K,W)$.

The rate region RAKW(pK,W) was determined by Ahlswede and Körner [7] and Wyner [16]. Their result is the following.

Theorem 2.

(Ahlswede, Körner [7] andWyner [16])

RAKW(pK,W)=R(pK,W).

Watanabe and Oohama [10] investigated an explicit form of RWO(pK,W) to show that it is equal to Rc(pK,W), that is, we have the following result.

Theorem 3.

(Watanabe and Oohama [10])

RWO(pK,W)=RAKWc(pK,W)=Rc(pK,W).

In the remaining part of this section, we investigate a relationship between Problems 2 and 3 to give an outline of the proof of this theorem. Let

$$p_{c,A}^{(n)}=p_{c,A}^{(n)}(\varphi^{(n)},\varphi_A^{(n)},\psi_A^{(n)}|p_K^n,W^n):=\Pr\left[K^n=\psi_A^{(n)}(\varphi_A^{(n)}(Z^n),\varphi^{(n)}(K^n))\right]$$

be the correct probability of decoding for Problem 3. The following lemma provides an important inequality to examine a relationship between these two problems.

Lemma 3.

For any(φ(n),φA(n),ψA(n)), we have the following:

$$p_{c,A}^{(n)}(\varphi^{(n)},\varphi_A^{(n)},\psi_A^{(n)}|p_K^n,W^n)\le\frac{1}{|\mathcal{X}|^m}+d(p_{V^m}\times p_{M_A^{(n)}},p_{\widetilde{K}^mM_A^{(n)}}).$$

Proof of this lemma is given in Appendix A. Using Lemma 3, we can easily prove the inclusion $\mathcal{R}_{\mathrm{WO}}(p_K,W)\subseteq\mathcal{R}_{\mathrm{AKW}}^{\mathrm{c}}(p_K,W)$, which corresponds to the converse part of Theorem 3.

Proof of $\mathcal{R}_{\mathrm{WO}}(p_K,W)\subseteq\mathcal{R}_{\mathrm{AKW}}^{\mathrm{c}}(p_K,W)$:

We assume that (RA,R)RAKW(pK,W). Then there exists {(φ(n),φA(n),ψA(n)}n1 such that ε>0, n0=n0(ε)N0, nn0,

mnlog|X|R+ε,φA(n)FA(n)(R+ε), (11)
and pe,A(n)φ(n),φA(n),ψA(n)|pKn,Wnε. (12)

From the above sequence {(φ(n),φA(n),ψA(n))}n1, we can construct the sequence {(φ^(n),φA(n),ψA(n)}n1 such that

R+ε1nlog||φ^(n)||=m^nlog|X|maxRε,mnlog|X|,φA(n)FA(n)(R+ε), (13)
pe,A(n)φ^(n),φA(n),ψA(n)|pKn,Wnpe,A(n)φ(n),φA(n),ψA(n)|pKn,Wnε. (14)

Set K˜m^:=φ^(n)(Kn). Then from (14) and Lemma 3, we have

dpVm^×pMA(n),pK˜m^MA(n)1ε1|X|m^,

from which we have

dpVm^×pMA(n),pK˜m^MA(n)12ε, (15)

for sufficiently large n. From (13), (15), and the definition of RWO(pK,W), we can see that (RA+ε,R)RWO(pK,W), or equivalent to

(RA+ε,R)RWOc(pK,W)(RA,R)RWOc(pK,W)ε(1,0), (16)

where we set R(a,b):={(u,v):(u+a,v+b)R}. Since (RA,R)RAKW(pK,W) is arbitrary, we have that

RAKW(pK,W)RWOc(pK,W)ε(1,0)RAKW(pK,W)+ε(1,0)RWOc(pK,W)RWO(pK,W)RAKWc(pK,W)+ε(1,0)RWO(pK,W)Rc(pK,W)+ε(1,0). (17)

By letting ε0 in (17) and considering that Rc(pK,W) is an open set, we have that RWO(pK,W)Rc(pK,W). □

To prove $\mathcal{R}_{\mathrm{WO}}(p_K,W)\supseteq\mathcal{R}_{\mathrm{AKW}}^{\mathrm{c}}(p_K,W)$, we examine an upper bound of $\xi_d^{(n)}(\varphi^{(n)},R_A|p_K^n,W^n)$. For $\eta>0$, we define

$$\wp_\eta^{(n)}=\wp_\eta^{(n)}(R|p_K^n,W^n):=p_{M_A^{(n)}Z^nK^n}\left\{R\ge\frac{1}{n}\log\frac{1}{p_{K^n|M_A^{(n)}}(K^n|M_A^{(n)})}-\eta\right\},\qquad\Phi_{d,\eta}^{(n)}(R_A,R|p_K^nW^n):=\max_{\varphi_A^{(n)}\in\mathcal{F}_A^{(n)}(R_A)}\wp_\eta^{(n)}(R|p_K^n,W^n)+e^{-n\eta}.$$

According to Watanabe and Oohama [10], we have the following two propositions.

Proposition 1.

(Watanabe and Oohama [10]). Fix any positive $\eta>0$. There exists $\varphi^{(n)}:\mathcal{X}^n\to\mathcal{X}^m$ satisfying $(m/n)\log|\mathcal{X}|\ge R-2\eta$ such that

$$\xi_d^{(n)}(\varphi^{(n)},R_A|p_K^n,W^n)\le\Phi_{d,\eta}^{(n)}(R_A,R|p_K^nW^n).$$

Proposition 2.

(Watanabe and Oohama [10]). If $(R_A,R)\notin\mathcal{R}(p_K,W)$, then for any $\eta>0$ and any $\varphi_A^{(n)}\in\mathcal{F}_A^{(n)}(R_A)$, we have

$$\lim_{n\to\infty}\wp_\eta^{(n)}(R|p_K^n,W^n)=0,$$

which implies that

$$\lim_{n\to\infty}\Phi_{d,\eta}^{(n)}(R_A,R|p_K^nW^n)=0.$$

The inclusion $\mathcal{R}_{\mathrm{WO}}(p_K,W)\supseteq\mathcal{R}_{\mathrm{AKW}}^{\mathrm{c}}(p_K,W)$ immediately follows from Propositions 1 and 2.

5. Reliability and Security Analysis

In this section, we state our main results. We use the affine encoder φ(n) defined in the previous section. We upper bound pe=pe(φ(n),ψ(n)|pXn) and Δ(n)=Δ(n)(φ(n),φA(n)|pXn,pKn,Wn) to obtain inner bounds of RSys(pX,pK,W) and DSys(pX,pK,W).

Let

$$\Phi_{D,\eta}^{(n)}(R_A,R|p_K^nW^n):=\max_{\varphi_A^{(n)}\in\mathcal{F}_A^{(n)}(R_A)}nR\,\wp_\eta^{(n)}(R|p_K^n,W^n)+e^{-n\eta},\qquad\Phi_{D}^{(n)}(R_A,R|p_K^nW^n):=\inf_{\eta>0}\Phi_{D,\eta}^{(n)}(R_A,R|p_K^nW^n).$$

Then we have the following proposition.

Proposition 3.

For any $R_A,R>0$ and any $(p_K,W)$, there exists a sequence of mappings $\{(\varphi^{(n)},\psi^{(n)})\}_{n=1}^{\infty}$ such that for any $p_X\in\mathcal{P}(\mathcal{X})$, we have

$$R-\frac{1}{n}\le\frac{1}{n}\log|\mathcal{X}^m|=\frac{m}{n}\log|\mathcal{X}|\le R,\quad p_e(\phi^{(n)},\psi^{(n)}|p_X^n)\le e(n+1)^{2|\mathcal{X}|}\{(n+1)^{|\mathcal{X}|}+1\}e^{-nE(R|p_X)}, \quad (18)$$

and for any eavesdropper A with $\varphi_A$ satisfying $\varphi_A^{(n)}\in\mathcal{F}_A^{(n)}(R_A)$, we have

$$\Delta^{(n)}(\varphi^{(n)},\varphi_A^{(n)}|p_X^n,p_K^n,W^n)\le\{(n+1)^{|\mathcal{X}|}+1\}\Phi_D^{(n)}(R_A,R|p_K^nW^n). \quad (19)$$

This proposition can be proved using several tools developed in previous works. The details of the proof are given in the next section. As we stated in Proposition 2, Watanabe and Oohama [10] proved that if $(R_A,R)\notin\mathcal{R}(p_K,W)$, then for any $\eta>0$ and any $\varphi_A^{(n)}\in\mathcal{F}_A^{(n)}(R_A)$, the quantity $\wp_\eta^{(n)}(R|p_K^n,W^n)$ tends to zero as $n\to\infty$. Their method cannot be applied to the analysis of $\Phi_D^{(n)}(R_A,R|p_K^nW^n)$, since the quantity $nR$ is multiplied with the quantity $\wp_\eta^{(n)}(R|p_K^n,W^n)$ in the definition of $\Phi_D^{(n)}(R_A,R|p_K^nW^n)$. In this paper, we derive an upper bound of $\Phi_D^{(n)}(R_A,R|p_K^nW^n)$ that decays exponentially as $n\to\infty$ if $(R_A,R)\notin\mathcal{R}(p_K,W)$. To derive the upper bound, we use a new method developed by Oohama to prove strong converse theorems in multi-terminal source or channel networks [9,17,18,19,20].

We define several functions and sets to describe the upper bound of $\Phi_D^{(n)}(R_A,R|p_K^nW^n)$. Set

$$\mathcal{Q}(p_{K|Z}):=\{q=q_{UZK}:|\mathcal{U}|\le|\mathcal{Z}|,\ U\to Z\to K,\ p_{K|Z}=q_{K|Z}\}.$$

For $(\mu,\alpha)\in[0,1]^2$ and for $q=q_{UZK}\in\mathcal{Q}(p_{K|Z})$, define

$$\omega_{q|p_Z}^{(\mu,\alpha)}(z,k|u):=\bar{\alpha}\log\frac{q_Z(z)}{p_Z(z)}+\alpha\left[\mu\log\frac{q_{Z|U}(z|u)}{p_Z(z)}+\bar{\mu}\log\frac{1}{q_{K|U}(k|u)}\right],$$
$$\Omega^{(\mu,\alpha)}(q|p_Z):=-\log\mathrm{E}_q\left[\exp\left\{-\omega_{q|p_Z}^{(\mu,\alpha)}(Z,K|U)\right\}\right],\qquad\Omega^{(\mu,\alpha)}(p_K,W):=\min_{q\in\mathcal{Q}(p_{K|Z})}\Omega^{(\mu,\alpha)}(q|p_Z),$$
$$F^{(\mu,\alpha)}(\mu R_A+\bar{\mu}R|p_K,W):=\frac{\Omega^{(\mu,\alpha)}(p_K,W)-\alpha(\mu R_A+\bar{\mu}R)}{2+\alpha\bar{\mu}},\qquad F(R_A,R|p_K,W):=\sup_{(\mu,\alpha)\in[0,1]^2}F^{(\mu,\alpha)}(\mu R_A+\bar{\mu}R|p_K,W).$$

We next define a function serving as a lower bound of $F(R_A,R|p_K,W)$. For each $p_{UZK}\in\mathcal{P}_{\mathrm{sh}}(p_K,W)$, define

$$\tilde{\omega}_{p}^{(\mu)}(z,k|u):=\mu\log\frac{p_{Z|U}(z|u)}{p_Z(z)}+\bar{\mu}\log\frac{1}{p_{K|U}(k|u)},$$
$$\tilde{\Omega}^{(\mu,\lambda)}(p):=-\log\mathrm{E}_p\left[\exp\left\{-\lambda\tilde{\omega}_{p}^{(\mu)}(Z,K|U)\right\}\right],\qquad\tilde{\Omega}^{(\mu,\lambda)}(p_K,W):=\min_{p\in\mathcal{P}_{\mathrm{sh}}(p_K,W)}\tilde{\Omega}^{(\mu,\lambda)}(p).$$

Furthermore, set

$$\tilde{F}^{(\mu,\lambda)}(\mu R_A+\bar{\mu}R|p_K,W):=\frac{\tilde{\Omega}^{(\mu,\lambda)}(p_K,W)-\lambda(\mu R_A+R)}{2+\lambda(5-\mu)},\qquad\tilde{F}(R_A,R|p_K,W):=\sup_{\lambda\ge0,\,\mu\in[0,1]}\tilde{F}^{(\mu,\lambda)}(\mu R_A+\bar{\mu}R|p_K,W).$$

We can show that the above functions satisfy the following property.

Property 2.

  • (a) 

    The cardinality bound|U||Z|inQ(pK|Z)is sufficient to describe the quantityΩ(μ,β,α)(pK,W). Furthermore, the cardinality bound|U||Z|inPsh(pK,W)is sufficient to describe the quantityΩ˜(μ,λ)(pK,W).

  • (b) 
    For anyRA,R0, we have
    F(RA,R|pK,W)F˜(RA,R|pK,W).
  • (c) 
    For anyp=pUZKPsh(pZ,W)and any(μ,λ)[0,1]2, we have
    0Ω˜(μ,λ)(p)μlog|Z|+μ¯log|K|. (20)
  • (d) 
    Fix anyp=pUZKPsh(pK,W)andμ[0,1]. Forλ[0,1], we define a probability distributionp(λ)=pUZK(λ)by
    p(λ)(u,z,k):=p(u,z,k)expλω˜p(μ)(z,k|u)Epexpλω˜p(μ)(Z,K|U).
    Then forλ[0,1/2], Ω˜(μ,λ)(p)is twice differentiable. Furthermore, forλ[0,1/2], we have
    ddλΩ˜(μ,λ)(p)=Ep(λ)ω˜p(μ)(Z,K|U),d2dλ2Ω˜(μ,λ)(p)=Varp(λ)ω˜p(μ)(Z,K|U).

    The second equality implies thatΩ˜(μ,λ)(p|pK,W)is a concave function ofλ0.

  • (e) 
    For(μ,λ)[0,1]×[0,1/2], define
    ρ(μ,λ)(pK,W):=max(ν,p)[0,λ]×Psh(pK,W):Ω˜(μ,λ)(p)=Ω˜(μ,λ)(pK,W)Varp(ν)ω˜p(μ)(Z,K|U),
    and set
    ρ=ρ(pK,W):=max(μ,λ)[0,1]×[0,1/2]ρ(μ,λ)(pK,W).
    Then we haveρ(pK,W)<. Furthermore, for any(μ,λ)[0,1]×[0,1/2], we have
    Ω˜(μ,λ)(pK,W)λR(μ)(pK,W)λ22ρ(pK,W).
  • (f) 
    For everyτ(0,(1/2)ρ(pK,W)), the condition(RA,R+τ)R(pK,W)implies
    F˜(RA,R|pK,W)>ρ(pK,W)4·g2τρ(pK,W)>0,
    where g is the inverse function ofϑ(a):=a+(5/4)a2,a0.

Proof of this property is found in Oohama [9] (extended version). On the upper bound of ΦD(n)(RA,R|pKnWn), we have the following:

Proposition 4.

For any $n\ge1/R$, we have

$$\Phi_D^{(n)}(R_A,R|p_K^nW^n)\le5nR\,e^{-nF(R_A,R|p_K,W)}. \quad (21)$$

Proof of this proposition is given in the next section. Proposition 4 has a close connection with the one helper source coding problem, which was explained as Problem 3 in the previous section. In fact, for the proof we use the result of Oohama [9], who obtained an explicit lower bound of the optimal exponent of the exponential decay of $p_{c,A}^{(n)}(\varphi^{(n)},\varphi_A^{(n)},\psi_A^{(n)}|p_K^n,W^n)$ for $(R_A,R)\notin\mathcal{R}_{\mathrm{AKW}}(p_K,W)$. By Propositions 3 and 4, we obtain our main result shown below.

Theorem 4.

For any $R_A,R>0$ and any $(p_K,W)$, there exists a sequence of mappings $\{(\varphi^{(n)},\psi^{(n)})\}_{n=1}^{\infty}$ such that for any $p_X\in\mathcal{P}(\mathcal{X})$, we have

$$R-\frac{1}{n}\le\frac{1}{n}\log|\mathcal{X}^m|=\frac{m}{n}\log|\mathcal{X}|\le R,\quad p_e(\phi^{(n)},\psi^{(n)}|p_X^n)\le e^{-n[E(R|p_X)-\delta_{1,n}]}, \quad (22)$$

and for any eavesdropper A with $\varphi_A$ satisfying $\varphi_A^{(n)}\in\mathcal{F}_A^{(n)}(R_A)$, we have

$$\Delta^{(n)}(\varphi^{(n)},\varphi_A^{(n)}|p_X^n,p_K^n,W^n)\le e^{-n[F(R_A,R|p_K,W)-\delta_{2,n}]}, \quad (23)$$

where $\delta_{i,n}$, $i=1,2$, are defined by

$$\delta_{1,n}:=\frac{1}{n}\log\left[e(n+1)^{2|\mathcal{X}|}\{(n+1)^{|\mathcal{X}|}+1\}\right],\qquad\delta_{2,n}:=\frac{1}{n}\log\left[5nR\{(n+1)^{|\mathcal{X}|}+1\}\right].$$

Note that for $i=1,2$, $\delta_{i,n}\to0$ as $n\to\infty$.

The functions $E(R|p_X)$ and $F(R_A,R|p_K,W)$ take positive values if and only if $(R_A,R)$ belongs to the set

$$\{(R_A,R):R>H(X)\}\cap\mathcal{R}^{\mathrm{c}}(p_K,W)=:\mathcal{R}_{\mathrm{Sys}}^{(\mathrm{in})}(p_X,p_K,W).$$

Thus, by Theorem 4, under (RA,R)RSys(in)(pX,pK,W), we have the following:

  • In terms of reliability, pe(ϕ(n),ψ(n)|pXn) goes to zero exponentially as n tends to infinity, and its exponent is lower bounded by the function E(R|pX).

  • In terms of security, for any φA satisfying φA(n)FA(n)(RA), the information leakage Δ(n)(φ(n),φA(n)|pXn,pKn,Wn) on Xn goes to zero exponentially as n tends to infinity, and its exponent is lower bounded by the function F(RA,R|pK,W).

  • The code that attains the exponent functions E(R|pX) is the universal code that depends only on R and not on the value of the distribution pX.

Define

$$\mathcal{D}_{\mathrm{Sys}}^{(\mathrm{in})}(p_X,p_K,W):=\{(R_A,R,E(R|p_X),F(R_A,R|p_K,W)):(R_A,R)\in\mathcal{R}_{\mathrm{Sys}}^{(\mathrm{in})}(p_X,p_K,W)\}.$$

From Theorem 4, we immediately obtain the following corollary.

Corollary 1.

$$\mathcal{R}_{\mathrm{Sys}}^{(\mathrm{in})}(p_X,p_K,W)\subseteq\mathcal{R}_{\mathrm{Sys}}(p_X,p_K,W),\qquad\mathcal{D}_{\mathrm{Sys}}^{(\mathrm{in})}(p_X,p_K,W)\subseteq\mathcal{D}_{\mathrm{Sys}}(p_X,p_K,W).$$

A typical shape of $\{(R_A,R):R>H(X)\}\cap\mathcal{R}^{\mathrm{c}}(p_K,W)$ is shown in Figure 8.

Figure 8. The inner bound $\mathcal{R}_{\mathrm{Sys}}^{(\mathrm{in})}(p_X,p_K,W)$ of the reliable and secure rate region $\mathcal{R}_{\mathrm{Sys}}(p_X,p_K,W)$.

6. Proofs of the Results

In this section, we prove our main theorem, i.e., Theorem 4.

6.1. Types of Sequences and Their Properties

In this subsection, we present basic results on types. These results are basic tools for our analysis of several bounds related to the error probability of decoding or security.

Definition 5.

For any $n$-sequence $x^n=x_1x_2\cdots x_n\in\mathcal{X}^n$, $n(x|x^n)$ denotes the number of $t$ such that $x_t=x$. The relative frequency $\{n(x|x^n)/n\}_{x\in\mathcal{X}}$ of the components of $x^n$ is called the type of $x^n$, denoted by $P_{x^n}$. The set that consists of all the types on $\mathcal{X}$ is denoted by $\mathcal{P}_n(\mathcal{X})$. Let $\bar{X}$ denote an arbitrary random variable whose distribution $p_{\bar{X}}$ belongs to $\mathcal{P}_n(\mathcal{X})$. For $p_{\bar{X}}\in\mathcal{P}_n(\mathcal{X})$, set $T_{\bar{X}}^n:=\{x^n:P_{x^n}=p_{\bar{X}}\}$.

For sets of types and joint types, the following lemma holds. For details of the proof, see Csiszár and Körner [21].

Lemma 4.

  • (a)

    $|\mathcal{P}_n(\mathcal{X})|\le(n+1)^{|\mathcal{X}|}$.

  • (b)
    For $p_{\bar{X}}\in\mathcal{P}_n(\mathcal{X})$,
    $$(n+1)^{-|\mathcal{X}|}e^{nH(\bar{X})}\le|T_{\bar{X}}^n|\le e^{nH(\bar{X})}.$$
  • (c)
    For $x^n\in T_{\bar{X}}^n$,
    $$p_X^n(x^n)=e^{-n[H(\bar{X})+D(p_{\bar{X}}\|p_X)]}.$$

By Lemma 4 parts (b) and (c), we immediately obtain the following lemma:

Lemma 5.

For $p_{\bar{X}}\in\mathcal{P}_n(\mathcal{X})$,

$$p_X^n(T_{\bar{X}}^n)\le e^{-nD(p_{\bar{X}}\|p_X)}.$$
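Lemma 4 is easy to verify numerically for a small binary example. The parameters below (n = 12, a particular type, and a particular $p_X$) are arbitrary illustrative choices and do not come from the paper.

```python
import numpy as np
from math import comb

n = 12
counts = (4, 8)                      # a binary type P_xbar = (4/12, 8/12)
p = np.array(counts) / n
H = -(p * np.log(p)).sum()           # H(Xbar) in nats

T_size = comb(n, counts[0])          # |T_Xbar^n| for a binary alphabet
lower = (n + 1) ** (-2) * np.exp(n * H)   # (n+1)^{-|X|} e^{n H(Xbar)}
upper = np.exp(n * H)                     # e^{n H(Xbar)}
print(lower <= T_size <= upper, lower, T_size, upper)   # Lemma 4(b)

# Lemma 4(c): every x^n in T_Xbar^n has probability e^{-n[H(Xbar)+D(p_Xbar||p_X)]}.
pX = np.array([0.2, 0.8])
D = (p * np.log(p / pX)).sum()
prob_one_seq = pX[0] ** counts[0] * pX[1] ** counts[1]
print(np.isclose(prob_one_seq, np.exp(-n * (H + D))))
```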

6.2. Upper Bounds of pe(ϕ(n),ψ(n)|pXn), and Δn(φ(n),φA(n)|pXn,pKn,Wn)

In this subsection, we evaluate upper bounds of pe(ϕ(n),ψ(n)|pXn) and Δn(φ(n),φA(n)|pXn,pKn,Wn). For pe(ϕ(n),ψ(n)|pXn), we derive an upper bound that can be characterized with a quantity depending on (ϕ(n),ψ(n)) and type Pxn of sequences xnXn. We first evaluate pe(ϕ(n),ψ(n)|pXn). For xnXn and pX¯Pn(X), we define the following functions:

$$\Xi_{x^n}(\phi^{(n)},\psi^{(n)}):=\begin{cases}1&\text{if }\psi^{(n)}(\phi^{(n)}(x^n))\neq x^n,\\0&\text{otherwise,}\end{cases}\qquad\Xi_{\bar{X}}(\phi^{(n)},\psi^{(n)}):=\frac{1}{|T_{\bar{X}}^n|}\sum_{x^n\in T_{\bar{X}}^n}\Xi_{x^n}(\phi^{(n)},\psi^{(n)}).$$

Then we have the following lemma.

Lemma 6.

In the proposed system, for any pair $(\phi^{(n)},\psi^{(n)})$, we have

$$p_e(\phi^{(n)},\psi^{(n)}|p_X^n)\le\sum_{p_{\bar{X}}\in\mathcal{P}_n(\mathcal{X})}\Xi_{\bar{X}}(\phi^{(n)},\psi^{(n)})e^{-nD(p_{\bar{X}}\|p_X)}. \quad (24)$$

Proof. 

We have the following chain of inequalities:

pe(ϕ(n),ψ(n)|pXn)=(a)pX¯Pn(X)xnTX¯nΞxn(ϕ(n),ψ(n))pXn(xn)=pX¯Pn(X)1|TX¯n|xnTX¯nΞxn(ϕ(n),ψ(n))|TX¯n|pXn(xn)=(b)pX¯Pn(X)1|TX¯n|xnTX¯nΞxn(ϕ(n),ψ(n))pXn(TX¯n)=(c)pX¯Pn(X)ΞX¯(ϕ(n),ψ(n))pXn(TX¯n)(d)pX¯Pn(X)ΞX¯(ϕ(n),ψ(n))enD(pX¯||pX).

Step (a) follows from the definition of Ξxn(ϕ(n),ψ(n)). Step (b) follows from the probabilities pXn(xn) for xnTX¯n taking an identical value. Step (c) follows from the definition of ΞX¯(ϕ(n),ψ(n)). Step (d) follows from Lemma 5. □

6.3. Random Coding Arguments

We construct the affine encoder $\varphi^{(n)}$ using the random coding method. For the decoder $\psi^{(n)}$, we use the minimum entropy decoder used in Csiszár [6] and Oohama and Han [22].

Random Construction of Affine Encoders: We first choose $m$ such that

$$m:=\left\lfloor\frac{nR}{\log|\mathcal{X}|}\right\rfloor,$$

where $\lfloor a\rfloor$ stands for the integer part of $a$. It is obvious that

$$R-\frac{1}{n}\le\frac{m}{n}\log|\mathcal{X}|\le R.$$

By definition (2) of $\phi^{(n)}$, we have that for $x^n\in\mathcal{X}^n$,

$$\phi^{(n)}(x^n)=x^nA,$$

where $A$ is a matrix with $n$ rows and $m$ columns. By definition (3) of $\varphi^{(n)}$, we have that for $k^n\in\mathcal{X}^n$,

$$\varphi^{(n)}(k^n)=k^nA\oplus b^m,$$

where $b^m$ is a vector with $m$ components. Entries of $A$ and $b^m$ are from the field $\mathcal{X}$. These entries are selected at random, independently of each other, and with a uniform distribution. The randomly constructed linear encoder $\phi^{(n)}$ and affine encoder $\varphi^{(n)}$ have the three properties shown in the following lemma.

Lemma 7 (Properties of Linear/Affine Encoders).

  • (a) 
    For any $x^n,v^n\in\mathcal{X}^n$ with $x^n\neq v^n$, we have
    $$\Pr[\phi^{(n)}(x^n)=\phi^{(n)}(v^n)]=\Pr[(x^n\ominus v^n)A=0^m]=|\mathcal{X}|^{-m}. \quad (25)$$
  • (b) 
    For any $s^n\in\mathcal{X}^n$ and for any $\tilde{s}^m\in\mathcal{X}^m$, we have
    $$\Pr[\varphi^{(n)}(s^n)=\tilde{s}^m]=\Pr[s^nA\oplus b^m=\tilde{s}^m]=|\mathcal{X}|^{-m}. \quad (26)$$
  • (c) 
    For any $s^n,t^n\in\mathcal{X}^n$ with $s^n\neq t^n$, and for any $\tilde{s}^m\in\mathcal{X}^m$, we have
    $$\Pr[\varphi^{(n)}(s^n)=\varphi^{(n)}(t^n)=\tilde{s}^m]=\Pr[s^nA\oplus b^m=t^nA\oplus b^m=\tilde{s}^m]=|\mathcal{X}|^{-2m}. \quad (27)$$

Proof of this lemma is given in Appendix B. We next define the decoder function ψ(n):XmXn. To this end, we define the following quantities.

Definition 6.

For xnXn, we denote the entropy calculated from the type Pxn by H(xn). In other words, for a type PX¯Pn(X) such that PX¯=Pxn, we define H(xn)=H(X¯).

Minimum Entropy Decoder: For $\phi^{(n)}(x^n)=\tilde{x}^m$, we define the decoder function $\psi^{(n)}:\mathcal{X}^m\to\mathcal{X}^n$ as follows:

$$\psi^{(n)}(\tilde{x}^m):=\begin{cases}\hat{x}^n&\text{if }\phi^{(n)}(\hat{x}^n)=\tilde{x}^m\text{ and }H(\hat{x}^n)<H(\check{x}^n)\text{ for all }\check{x}^n\text{ such that }\phi^{(n)}(\check{x}^n)=\tilde{x}^m\text{ and }\check{x}^n\neq\hat{x}^n,\\\text{arbitrary}&\text{if there is no such }\hat{x}^n\in\mathcal{X}^n.\end{cases}$$
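For very small block lengths, the minimum entropy decoder can be implemented by exhaustive search over all preimages of $\tilde{x}^m$. The sketch below does exactly that, with ties broken arbitrarily (a simplification of the strict-inequality rule above), using an arbitrary binary example not taken from the paper.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(4)
q, n, m = 2, 10, 7                    # tiny sizes so exhaustive search is feasible

A = rng.integers(0, q, size=(n, m))   # random linear encoder phi(x) = xA over GF(q)

def emp_entropy(x):
    """Entropy H(x^n) of the type of x^n, in nats."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / len(x)
    return -(p * np.log(p)).sum()

def psi(x_tilde):
    """Brute-force minimum entropy decoder: among all preimages of x_tilde under phi,
    return one with the smallest empirical entropy (ties broken arbitrarily)."""
    best, best_h = None, np.inf
    for cand in product(range(q), repeat=n):
        cand = np.array(cand)
        h = emp_entropy(cand)
        if np.array_equal((cand @ A) % q, x_tilde) and h < best_h:
            best, best_h = cand, h
    return best

x = rng.choice(q, size=n, p=[0.85, 0.15])     # low-entropy source realization
x_hat = psi((x @ A) % q)
print(x, x_hat, np.array_equal(x, x_hat))     # may fail; succeeds w.h.p. when R > H(X)
```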

Error Probability Bound: In the following arguments, we let expectations based on the random choice of the affine encoder φ(n) be denoted by E[·]. Define

$$\Lambda_{\bar{X}}(R):=e^{-n[R-H(\bar{X})]_{+}}.$$

Then we have the following lemma.

Lemma 8.

For any $n$ and for any $p_{\bar{X}}\in\mathcal{P}_n(\mathcal{X})$,

$$\mathrm{E}\left[\Xi_{\bar{X}}(\phi^{(n)},\psi^{(n)})\right]\le e(n+1)^{|\mathcal{X}|}\Lambda_{\bar{X}}(R).$$

Proof of this lemma is given in Appendix C.

Estimation of Approximation Error: Define

$$\Theta(R,\varphi_A^{(n)}|p_K^n,W^n):=\sum_{(a,k^n)\in\mathcal{M}_A^{(n)}\times\mathcal{X}^n}p_{M_A^{(n)}K^n}(a,k^n)\log\left[1+(e^{nR}-1)p_{K^n|M_A^{(n)}}(k^n|a)\right].$$

Then we have the following lemma.

Lemma 9.

For any $n,m$ satisfying $(m/n)\log|\mathcal{X}|\le R$, we have

$$\mathrm{E}\left[D(p_{\widetilde{K}^m|M_A^{(n)}}\|p_{V^m}|p_{M_A^{(n)}})\right]\le\Theta(R,\varphi_A^{(n)}|p_K^n,W^n). \quad (28)$$

Proof of this lemma is given in Appendix D. From the bound (28) in Lemma (9), we know that the quantity Θ(R,φA(n)|pKn,Wn) serves as an upper bound of the ensemble average of the conditional divergence D(pK˜m|MA(n)||pVm|pMA(n)). Hayashi [23] obtained the same upper bound of the ensemble average of the conditional divergence for an ensemble of universal2 functions. In this paper, we prove the bound (28) for an ensemble of affine encoders. To derive this bound, we need to use Lemma 7 parts (b) and (c), the two important properties that a class of random affine encoders satisfies. From Lemmas 1 and 9, we have the following corollary.

Corollary 2.

$$\mathrm{E}\left[\Delta^{(n)}(\varphi^{(n)},\varphi_A^{(n)}|p_X^n,p_K^n,W^n)\right]\le\Theta(R,\varphi_A^{(n)}|p_K^n,W^n).$$

Existence of Good Universal Code (φ(n),ψ(n)):

From Lemma 8 and Corollary 2, we have the following lemma stating the existence of a good universal code (φ(n),ψ(n)).

Lemma 10.

There exists at least one deterministic code $(\varphi^{(n)},\psi^{(n)})$ satisfying $(m/n)\log|\mathcal{X}|\le R$, such that for any $p_{\bar{X}}\in\mathcal{P}_n(\mathcal{X})$,

$$\Xi_{\bar{X}}(\phi^{(n)},\psi^{(n)})\le e(n+1)^{|\mathcal{X}|}\{(n+1)^{|\mathcal{X}|}+1\}\Lambda_{\bar{X}}(R).$$

Furthermore, for any $\varphi_A^{(n)}\in\mathcal{F}_A^{(n)}(R_A)$, we have

$$\Delta^{(n)}(\varphi^{(n)},\varphi_A^{(n)}|p_X^n,p_K^n,W^n)\le\{(n+1)^{|\mathcal{X}|}+1\}\Theta(R,\varphi_A^{(n)}|p_K^n,W^n).$$

Proof. 

We have the following chain of inequalities:

EpX¯Pn(X)ΞX¯(ϕ(n),ψ(n))e(n+1)|X|ΛX¯(R)+Δn(φ(n),φA(n)|pXn,pKn,Wn)Θ(R,φA(n)|pKn,Wn)=pX¯Pn(X)EΞX¯(ϕ(n),ψ(n))e(n+1)|X|ΛX¯(R)+EΔn(φ(n),φA(n)|pXn,pKn,Wn)Θ(R,φA(n)|pKn,Wn)(a)pX¯Pn(X)1+1=|Pn(X)|+1(b)(n+1)|X|+1.

Step (a) follows from Lemma 8 and Corollary 2. Step (b) follows from Lemma 4 part (a). Hence, there exists at least one deterministic code (φ(n),ψ(n)) such that

pX¯Pn(X)ΞX¯(ϕ(n),ψ(n))e(n+1)|X|ΛX¯(R)+Δn(φ(n),φA(n)|pXn,pKn,Wn)Θ(R,φA(n)|pKn,Wn)(n+1)|X|+1,

from which we have that

ΞX¯(ϕ(n),ψ(n))e(n+1)|X|ΛX¯(R)(n+1)|X|+1,

for any pX¯Pn(X). Furthermore, we have that for any φA(n)FA(n)(RA),

Δn(φ(n),φA(n)|pXn,pKn,Wn)Θ(R,φA(n)|pKn,Wn)(n+1)|X|+1,

completing the proof. □

Proposition 5.

For any $R_A,R>0$ and any $(p_K,W)$, there exists a sequence of mappings $\{(\varphi^{(n)},\psi^{(n)})\}_{n=1}^{\infty}$ such that for any $p_X\in\mathcal{P}(\mathcal{X})$, we have

$$R-\frac{1}{n}\le\frac{1}{n}\log|\mathcal{X}^m|=\frac{m}{n}\log|\mathcal{X}|\le R,\quad p_e(\phi^{(n)},\psi^{(n)}|p_X^n)\le e(n+1)^{2|\mathcal{X}|}\{(n+1)^{|\mathcal{X}|}+1\}e^{-nE(R|p_X)}, \quad (29)$$

and for any eavesdropper A with $\varphi_A$ satisfying $\varphi_A^{(n)}\in\mathcal{F}_A^{(n)}(R_A)$, we have

$$\Delta^{(n)}(\varphi^{(n)},\varphi_A^{(n)}|p_X^n,p_K^n,W^n)\le\{(n+1)^{|\mathcal{X}|}+1\}\Theta(R,\varphi_A^{(n)}|p_K^n,W^n). \quad (30)$$

Proof. 

By Lemma 10, there exists (φ(n),ψ(n)) satisfying (m/n)log|X|R such that for any pX¯ Pn(X),

ΞX¯(ϕ(n),ψ(n))e(n+1)|X|{(n+1)|X|+1}ΛX¯(R). (31)

Furthermore, for any φA(n)FA(n)(RA),

Δn(φ(n),φA(n)|pXn,pKn,Wn){(n+1)|X|+1}Θ(R,φA(n)|pKn,Wn). (32)

The bound (30) in Proposition 5 has already been proven in (32). Hence, it suffices to prove the bound (29) in Proposition 5 to complete the proof. On an upper bound of pe(ϕ(n),ψ(n)|pXn), we have the following chain of inequalities:

pe(ϕ(n),ψ(n)|pXn)(a)e(n+1)|X|{(n+1)|X|+1}pX¯Pn(X)ΛX¯(R)enD(pX¯||pX)e(n+1)|X|{(n+1)|X|+1}|Pn(X)|en[E(R|pX)](c)e(n+1)2|X|{(n+1)|X|+1}enE(R|pX).

Step (a) follows from Lemma 6 and (31). Step (b) follows from Lemma 4 part (a). □

6.4. Explicit Upper Bound of $\Theta(R,\varphi_A^{(n)}|p_K^n,W^n)$

In this subsection, we derive an explicit upper bound of Θ(R,φA(n)|pKn,Wn) that holds for any eavesdropper A with φA satisfying φA(n)FA(n)(RA). Here we recall the following definitions:

$$\wp_\eta^{(n)}=\wp_\eta^{(n)}(R|p_K^n,W^n):=p_{M_A^{(n)}Z^nK^n}\left\{R\ge\frac{1}{n}\log\frac{1}{p_{K^n|M_A^{(n)}}(K^n|M_A^{(n)})}-\eta\right\},\qquad\Phi_{D,\eta}^{(n)}(R_A,R|p_K^nW^n):=\max_{\varphi_A^{(n)}\in\mathcal{F}_A^{(n)}(R_A)}nR\,\wp_\eta^{(n)}(R|p_K^n,W^n)+e^{-n\eta},\qquad\Phi_{D}^{(n)}(R_A,R|p_K^nW^n):=\inf_{\eta>0}\Phi_{D,\eta}^{(n)}(R_A,R|p_K^nW^n).$$

Then we have the following lemma.

Lemma 11.

For any $\eta>0$ and for any eavesdropper A with $\varphi_A$ satisfying $\varphi_A^{(n)}\in\mathcal{F}_A^{(n)}(R_A)$, we have

$$\Theta(R,\varphi_A^{(n)}|p_K^n,W^n)\le\Phi_{D,\eta}^{(n)}(R_A,R|p_K^nW^n), \quad (33)$$

which implies that

$$\Theta(R,\varphi_A^{(n)}|p_K^n,W^n)\le\Phi_{D}^{(n)}(R_A,R|p_K^nW^n). \quad (34)$$

Proof. 

We first observe that

Θ(R,φA(n)|pKn,Wn)=Elog1+(enR1)pKn|MA(n)(Kn|MA(n)). (35)

We further observe the following:

R<1nlog1pKn|MA(n)(Kn|MA(n))ηenRpKn|MA(n)(Kn|MA(n))<enηlog1+enRpKn|MA(n)(Kn|MA(n))log1+enη(a)log1+enRpKn|MA(n)(Kn|MA(n))enηlog1+(enR1)pKn|MA(n)(Kn|MA(n))enη. (36)

Step (a) follows from log(1+a)a. We also note that

log1+(enR1)pKn|MA(n)(Kn|MA(n))log[enR]=nR. (37)

From (35), (36), and (37) we have the bound (33) in Lemma 11. □

Proof of Proposition 3:

This proposition immediately follows from Proposition 5 and Lemma 11. □

For the upper bound of η(n), we have the following lemma.

Lemma 12.

For anyη>0and for any eavesdropperAwithφAsatisfyingφA(n)FA(n)(RA), we haveη(n)˜η(n)+3enη, where

˜η(n):=pMA(n)ZnKn{
01nlogq^MA(n)ZnKn(MA(n),Zn,Kn)pMA(n)ZnKn(MA(n),Zn,Kn)η, (38)
01nlogqZn(Zn)pZn(Zn)η, (39)
RA1nlogpZn|MA(n)(Zn|MA(n))pZn(Zn)η,
R1nlog1pKn|MA(n)(Kn|MA(n))η}. (40)

The probability distributions appearing in the two inequalities (38) and (39) in the right members of (40) have a property that we can select them arbitrarily. In (38), we can choose any probability distributionq^MA(n)ZnKnonMA(n)×Zn×Xn. In (39), we can choose any distributionqZnonZn.

Proof of this lemma is given in Appendix E.

Proof of Proposition 4:

The claim of Proposition 4 is that for n1/R,

ΦD(n)(RA,R|pKnWn)5nRenF(RA,R|pK,W). (41)

By Lemma 12 and the definition of ΦD,η(n)(RA,R|pKnWn), we have that for n1/R,

ΦD,η(n)(RA,R|pKnWn)nR(˜η(n)+4enη). (42)

The quantity ˜η(n)+4enη is the same as the upper bound on the correct probability of decoding for one helper source coding problem in Lemma 1 in Oohama [9] (extended version). In a manner similar to the derivation of the exponential upper bound of the correct probability of decoding for one helper source coding problem, we can prove that for any φA(n)FA(n)(RA) and for some η*=η*(n,RA,R), we have

˜η*(n)+4enη*5enF(RA,R|pK,W). (43)

From (42), (43), and the definition of ΦD(n)(RA,R|pKnWn), we have (41). □

7. Conclusions

In this paper, we have proposed a novel security model for analyzing the security of Shannon cipher systems against an adversary that is not only eavesdropping the public communication channel to obtain ciphertexts but is also obtaining some physical information leaked by the device implementing the cipher system through side-channel attacks. We have also presented a countermeasure against such an adversary in the case of one-time pad encryption by using an affine encoder with certain properties. The main distinguishing feature of our countermeasure is that it is independent of the characteristics or the types of physical information leaked from the devices on which the cipher system is implemented.

Appendix A. Correct Probability of Decoding and Variational Distance

In this appendix, we prove Lemma 3.

For aMA(n), we set

D(a)=k˜m:k˜m=φ(n)(kn) and ψA(n)(k˜m,a)=kn for some knXn.

Then we have the following chain of inequalities:

dpVm×pMA(n),pK˜mMA(n)=aMA(n)pMA(n)(a)k˜mXmpK˜m|MA(n)(k˜m|a)1|X|maMA(n)pMA(n)(a)pK˜m|MA(n)D(a)|a|D(a)||X|m=aMA(n)pMA(n)(a)pK˜m|MA(n)D(a)|a1|X|m=pc,A(n)φ(n),φA(n),ψA(n)|pKn,Wn1|X|m,

completing the proof. □

Appendix B. Proof of Lemma 7

Let $a_l^m$ be the $l$-th row vector of the matrix $A$. For each $l=1,2,\ldots,n$, let $A_l^m\in\mathcal{X}^m$ be a random vector that represents the randomness of the choice of $a_l^m\in\mathcal{X}^m$. Let $B^m\in\mathcal{X}^m$ be a random vector that represents the randomness of the choice of $b^m\in\mathcal{X}^m$. We first prove part (a). Without loss of generality, we may assume $x_1\neq v_1$. Under this assumption, we have the following:

(xnvn)A=0ml=1n(xlvl)alm=0ma1m=l=2nvlxlx1v1alm. (A1)

Computing Pr[ϕ(xn)=ϕ(vn)], we have the following chain of equalities:

Pr[ϕ(xn)=ϕ(vn)]=Pr[(ynwn)A=0m]=(a)Pra1m=l=2nwlylx1v1alm=(b)alml=2nX(n1)ml=2nPAlm(alm)PA1ml=2nwlxly1v1alm=|X|malml=2nX(n1)ml=2nPAlm(alm)=|X|m.

Step (a) follows from (A1). Step (b) follows from that n random vectors Alm,l=1,2,,n are independent. We next prove part b. We have the following:

snAbm=s˜mbm=s˜ml=1nslalm. (A2)

Computing Pr[snAbm=s˜m], we have the following chain of equalities:

Pr[snAbm=s˜m]=(a)Prbm=s˜ml=1nslalm=(b)alml=1nXnml=1nPAlm(alm)PBms˜ml=1nslalm=(c)|X|malml=1nXnml=1nPAlm(alm)=|X|m.

Step (a) follows from (A2). Step (b) follows from that n random vectors Alm,l=1,2,,n and Bm are independent. We finally prove the part (c). We first observe that sntn is equivalent to siti for some i{1,2,,n}. Without loss of generality, we may assume that s1t1. Under this assumption, we have the following:

snAbm=tnAbm=s˜m(sntn)A=0,bm=s˜ml=1nslalma1m=l=2ntlsls1t1alm,bm=s˜ml=1nslalma1m=l=2ntlsls1t1alm,bm=s˜ml=2nt1sls1tls1t1alm. (A3)

Computing Pr[snAbm=tnAbm=s˜m], we have the following chain of equalities:

Pr[snAbm=tnAbm=s˜m]=(a)Pra1m=l=2ntlsls1t1almbm=s˜ml=2nt1sls1tls1t1alm=(b)alml=2nX(n1)ml=2nPAlm(alm)PA1ml=2ntlsls1t1almPBms˜ml=2nt1sls1tls1t1alm=|X|2malml=2nX(n1)ml=2nPAlm(alm)=|X|2m.

Step (a) follows from (A3). Step (b) follows from the independent property on Alm,l=1,2,,n and Bm. □

Appendix C. Proof of Lemma 8

In this appendix, we provide the proof of Lemma 8.

For simplicity of notation, we write M=|X|m. For xnXn we set

B(xn)=(xˇn):H(xˇn)H(xn),Pxˇn=Pxn,

Using parts (a) and (b) of Lemma 4, we have following inequalities:

|B(xn)|(n+1)|X|enH(xn), (A4)

On an upper bound of E[Ξxn(ϕ(n),ψ(n))], we have the following chain of inequalities:

E[Ξxn(ϕ(n),ψ(n))]xˇnB(xn),xˇnxnPrϕ(n)(xˇn)=ϕ(n)(xn)(a)xˇnB(xn)1M=|B(xn)|M(b)e(n+1)|X|en[RH(xn)].

Step (a) follows from Lemma 7 part (a). Step (b) follows from (A4) and $M\ge e^{nR-1}$.

E[Ξxn(ϕ(n),ψ(n))]e(n+1)|X|en[RH(xn)]+.

Hence we have

E[ΞX¯1X¯2(ϕ(n),ψ(n))]=E1|TX¯n|xnTX¯nΞxn(ϕ(n),ψ(n))=1|TX¯n|xnTX¯nE[Ξxn(ϕ(n),ψ(n))]e(n+1)|X|en[RH(X¯)]+,

completing the proof. □

Appendix D. Proof of Lemma 9

In this appendix, we prove Lemma 9. This lemma immediately follows from the following lemma:

Lemma A1.

For any n,m satisfying (m/n)log|X| R , we have

EDpK˜m|MA(n)pVmpMA(n)(a,kn)MA(n)×XnpMA(n)Kn(a,kn)log1+(|Xm|1)pKn|MA(n)(kn|a). (A5)

In fact, from |Xm|enR and (A5) in Lemma A1, we have the bound (28) in Lemma 9. Thus, we prove Lemma A1 instead of proving Lemma 9.

In the following arguments, we use the following simplified notations:

kn,KnXnk,KK,k˜m,K˜mXml,LL,φ(n):XnXmφ:KL,
φ(n)(kn)=knA+bmφ(k)=kA+b,VmXmVL,MA(n)MA(n)MM.

We define

χφ(k),l=1, if φ(k)=l,0, if φ(k)l.

Then, the conditional distribution of the random variable L=Lφ for given M=aM is

pL|M(l|a)=kKpK|M(k|a)χφ(k),l for lL.

Define

Υφ(k),l:=χφ(k),llog|L|kKpK|M(k|a)χφ(k),l.

Then the conditional divergence between pL|M and pV for given M is given by

DpL|MpVpM=(a,k)M×KlLpMK(a,k)Υφ(k),l. (A6)

The quantity Υφ(k),l has the following form:

Υφ(k),l=χφ(k),llog{|L|(pK|M(k|a)χφ(k),l+k{k}cpK|M(k|a)χφ(k),l. (A7)

The above form is useful for computing E[Υφ(k),l].

Proof of Lemma A1:

Taking the expectation of both sides of (A7) with respect to the random choice of the entry of the matrix A and the vector b representing the affine encoder φ, we have

EDpL|MpVpM=(a,k)M×KlLpMK(a,k)EΥφ(k),l. (A8)

To compute the expectation EΥφ(k),l, we introduce an expectation operator useful for the computation. Let Eφ(k)=lk[·] be an expectation operator based on the conditional probability measures Pr(·|φ(k)=lk). Using this expectation operator, the quantity EΥφ(k),l can be written as

EΥφ(k),l=lkLPrφ(k)=lkEφ(k)=lkΥlk,l. (A9)

Note that

Υlk,l=1,if lk=l,0,otherwise. (A10)

From (A9) and (A10), we have

EΥφ(k),l=Prφ(k)=lEφ(k)=lΥl,l=1|L|Eφ(k)=lΥl,l. (A11)

Using (A7), the expectation Eφ(k)=lΥl,l can be written as

Eφ(k)=lΥl,l=Eφ(k)=l[log{|L|(pK|M(k|a)+k{k}cpK|M(k|a)χφ(k),l. (A12)

Applying Jensen’s inequality to the right member of (A12), we obtain the following upper bound of Eφ(k)=lΥl,l:

Eφ(k)=lΥl,llog{|L|(pK|M(k|a)+k{k}cpK|M(k|a)Eφ(k)=lχφ(k),l=(a)log|L|pK|M(k|a)+k{k}cpK|M(k|a)1|L|=log1+(|L|1)pK|M(k|a). (A13)

Step (a) follows from that by Lemma 7 parts (b) and (c),

Eφ(k)=lχφ(k),l=Pr(φ(k)=l|φ(k)=l)=1|L|.

From (A8), (A11), and (A13), we have the bound (A5) in Lemma A1. □

Appendix E. Proof of Lemma 12

To prove Lemma 12, we prepare a lemma. For simplicity of notation, set |MA(n)|=MA. Define

\[
\mathcal{B}_n:=\Bigl\{(a,z^n,k^n):
\frac{1}{n}\log\frac{p_{M_A^{(n)}Z^nK^n}(a,z^n,k^n)}{\hat{q}_{M_A^{(n)}Z^nK^n}(a,z^n,k^n)}\ge-\eta\Bigr\}.
\]

Furthermore, define

\[
\begin{aligned}
\widetilde{\mathcal{C}}_n&:=\Bigl\{z^n:\frac{1}{n}\log\frac{p_{Z^n}(z^n)}{q_{Z^n}(z^n)}\ge-\eta\Bigr\},\qquad
\mathcal{C}_n:=\widetilde{\mathcal{C}}_n\times\mathcal{M}_A^{(n)}\times\mathcal{X}^n,\qquad
\mathcal{C}_n^{\mathrm{c}}:=\widetilde{\mathcal{C}}_n^{\mathrm{c}}\times\mathcal{M}_A^{(n)}\times\mathcal{X}^n,\\
\widetilde{\mathcal{D}}_n&:=\bigl\{(a,z^n):a=\varphi_A^{(n)}(z^n),\
p_{Z^n|M_A^{(n)}}(z^n|a)\le M_A\,\mathrm{e}^{n\eta}\,p_{Z^n}(z^n)\bigr\},\qquad
\mathcal{D}_n:=\widetilde{\mathcal{D}}_n\times\mathcal{X}^n,\qquad
\mathcal{D}_n^{\mathrm{c}}:=\widetilde{\mathcal{D}}_n^{\mathrm{c}}\times\mathcal{X}^n,\\
\mathcal{E}_n&:=\bigl\{(a,z^n,k^n):a=\varphi_A^{(n)}(z^n),\
p_{K^n|M_A^{(n)}}(k^n|a)\le\mathrm{e}^{-n(R+\eta)}\bigr\}.
\end{aligned}
\]

Then we have the following lemma.

Lemma A2.

\[
p_{M_A^{(n)}Z^nK^n}\bigl(\mathcal{B}_n^{\mathrm{c}}\bigr)\le\mathrm{e}^{-n\eta},\qquad
p_{M_A^{(n)}Z^nK^n}\bigl(\mathcal{C}_n^{\mathrm{c}}\bigr)\le\mathrm{e}^{-n\eta},\qquad
p_{M_A^{(n)}Z^nK^n}\bigl(\mathcal{D}_n^{\mathrm{c}}\bigr)\le\mathrm{e}^{-n\eta}.
\]

Proof. 

We first prove the first inequality.

\[
\begin{aligned}
p_{M_A^{(n)}Z^nK^n}\bigl(\mathcal{B}_n^{\mathrm{c}}\bigr)
&=\sum_{(a,z^n,k^n)\in\mathcal{B}_n^{\mathrm{c}}}p_{M_A^{(n)}Z^nK^n}(a,z^n,k^n)
\stackrel{\mathrm{(a)}}{\le}\sum_{(a,z^n,k^n)\in\mathcal{B}_n^{\mathrm{c}}}
 \mathrm{e}^{-n\eta}\,\hat{q}_{M_A^{(n)}Z^nK^n}(a,z^n,k^n)\\
&=\mathrm{e}^{-n\eta}\,\hat{q}_{M_A^{(n)}Z^nK^n}\bigl(\mathcal{B}_n^{\mathrm{c}}\bigr)\le\mathrm{e}^{-n\eta}.
\end{aligned}
\]

Step (a) follows from the definition of $\mathcal{B}_n$. For the second inequality, we have

\[
p_{M_A^{(n)}Z^nK^n}\bigl(\mathcal{C}_n^{\mathrm{c}}\bigr)
=p_{Z^n}\bigl(\widetilde{\mathcal{C}}_n^{\mathrm{c}}\bigr)
=\sum_{z^n\in\widetilde{\mathcal{C}}_n^{\mathrm{c}}}p_{Z^n}(z^n)
\stackrel{\mathrm{(a)}}{\le}\sum_{z^n\in\widetilde{\mathcal{C}}_n^{\mathrm{c}}}\mathrm{e}^{-n\eta}\,q_{Z^n}(z^n)
=\mathrm{e}^{-n\eta}\,q_{Z^n}\bigl(\widetilde{\mathcal{C}}_n^{\mathrm{c}}\bigr)\le\mathrm{e}^{-n\eta}.
\]

Step (a) follows from the definition of Cn. We finally prove the third inequality.

\[
\begin{aligned}
p_{M_A^{(n)}Z^nK^n}\bigl(\mathcal{D}_n^{\mathrm{c}}\bigr)
&=p_{M_A^{(n)}Z^n}\bigl(\widetilde{\mathcal{D}}_n^{\mathrm{c}}\bigr)
=\sum_{a\in\mathcal{M}_A^{(n)}}\
\sum_{\substack{z^n:\ \varphi_A^{(n)}(z^n)=a,\\ p_{Z^n}(z^n)<(\mathrm{e}^{-n\eta}/M_A)\,p_{Z^n|M_A^{(n)}}(z^n|a)}}
 p_{Z^n}(z^n)\\
&\le\frac{\mathrm{e}^{-n\eta}}{M_A}\sum_{a\in\mathcal{M}_A^{(n)}}\
\sum_{\substack{z^n:\ \varphi_A^{(n)}(z^n)=a,\\ p_{Z^n}(z^n)<(\mathrm{e}^{-n\eta}/M_A)\,p_{Z^n|M_A^{(n)}}(z^n|a)}}
 p_{Z^n|M_A^{(n)}}(z^n|a)
\le\frac{\mathrm{e}^{-n\eta}}{M_A}\,\bigl|\mathcal{M}_A^{(n)}\bigr|=\mathrm{e}^{-n\eta}.
\end{aligned}
\]

This completes the proof of Lemma A2. □
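The three bounds in Lemma A2 all rest on the same change-of-measure step: on the set where the likelihood ratio is exponentially small, the probability under $p$ is bounded by the same exponential. The following sketch (an illustration that is not part of the paper; the alphabet size, the constant, and the random distributions are arbitrary choices) verifies this elementary fact numerically.

# Illustration (not from the paper) of the change-of-measure step behind
# Lemma A2: for any distributions p, q on a finite set and any c > 0, the set
#   S = { x : p(x) <= exp(-c) * q(x) }
# satisfies p(S) <= exp(-c) * q(S) <= exp(-c).  The alphabet size, c and the
# random distributions below are arbitrary choices.
import numpy as np

rng = np.random.default_rng(3)
size, c = 1000, 2.0
p = rng.dirichlet(np.ones(size))
q = rng.dirichlet(np.ones(size))

S = p <= np.exp(-c) * q
print(f"p(S) = {p[S].sum():.6f}  <=  exp(-c) = {np.exp(-c):.6f}")
assert p[S].sum() <= np.exp(-c) + 1e-12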

Proof of Lemma 12:

By definition, we have

\[
\begin{aligned}
&p_{M_A^{(n)}Z^nK^n}\bigl(\mathcal{B}_n\cap\mathcal{C}_n\cap\mathcal{D}_n\cap\mathcal{E}_n\bigr)\\
&\quad=p_{M_A^{(n)}Z^nK^n}\Biggl\{
\frac{1}{n}\log\frac{p_{M_A^{(n)}Z^nK^n}(M_A^{(n)},Z^n,K^n)}{\hat{q}_{M_A^{(n)}Z^nK^n}(M_A^{(n)},Z^n,K^n)}\ge-\eta,\
\frac{1}{n}\log\frac{q_{Z^n}(Z^n)}{p_{Z^n}(Z^n)}\le\eta,\\
&\qquad\qquad\qquad\qquad\;
\frac{1}{n}\log M_A\ge\frac{1}{n}\log\frac{p_{Z^n|M_A^{(n)}}(Z^n|M_A^{(n)})}{p_{Z^n}(Z^n)}-\eta,\
R\le\frac{1}{n}\log\frac{1}{p_{K^n|M_A^{(n)}}(K^n|M_A^{(n)})}-\eta
\Biggr\}.
\end{aligned}
\]

Then, for any $\varphi_A^{(n)}$ satisfying $(1/n)\log\|\varphi_A^{(n)}\|\le R_A$, we have

\[
\begin{aligned}
&p_{M_A^{(n)}Z^nK^n}\bigl(\mathcal{B}_n\cap\mathcal{C}_n\cap\mathcal{D}_n\cap\mathcal{E}_n\bigr)\\
&\quad\le p_{M_A^{(n)}Z^nK^n}\Biggl\{
\frac{1}{n}\log\frac{p_{M_A^{(n)}Z^nK^n}(M_A^{(n)},Z^n,K^n)}{\hat{q}_{M_A^{(n)}Z^nK^n}(M_A^{(n)},Z^n,K^n)}\ge-\eta,\
\frac{1}{n}\log\frac{q_{Z^n}(Z^n)}{p_{Z^n}(Z^n)}\le\eta,\\
&\qquad\qquad\qquad\qquad\;
R_A\ge\frac{1}{n}\log\frac{p_{Z^n|M_A^{(n)}}(Z^n|M_A^{(n)})}{p_{Z^n}(Z^n)}-\eta,\
R\le\frac{1}{n}\log\frac{1}{p_{K^n|M_A^{(n)}}(K^n|M_A^{(n)})}-\eta
\Biggr\}.
\end{aligned}
\]

Hence, it suffices to show

\[
\wp\le p_{M_A^{(n)}Z^nK^n}\bigl(\mathcal{B}_n\cap\mathcal{C}_n\cap\mathcal{D}_n\cap\mathcal{E}_n\bigr)+3\,\mathrm{e}^{-n\eta}
\]

to prove Lemma 12. We have the following chain of inequalities:

\[
\begin{aligned}
\wp&\stackrel{\mathrm{(a)}}{=}p_{M_A^{(n)}Z^nK^n}\bigl(\mathcal{E}_n\bigr)
=p_{M_A^{(n)}Z^nK^n}\bigl(\mathcal{B}_n\cap\mathcal{C}_n\cap\mathcal{D}_n\cap\mathcal{E}_n\bigr)
+p_{M_A^{(n)}Z^nK^n}\bigl((\mathcal{B}_n\cap\mathcal{C}_n\cap\mathcal{D}_n)^{\mathrm{c}}\cap\mathcal{E}_n\bigr)\\
&\le p_{M_A^{(n)}Z^nK^n}\bigl(\mathcal{B}_n\cap\mathcal{C}_n\cap\mathcal{D}_n\cap\mathcal{E}_n\bigr)
+p_{M_A^{(n)}Z^nK^n}\bigl(\mathcal{B}_n^{\mathrm{c}}\bigr)
+p_{M_A^{(n)}Z^nK^n}\bigl(\mathcal{C}_n^{\mathrm{c}}\bigr)
+p_{M_A^{(n)}Z^nK^n}\bigl(\mathcal{D}_n^{\mathrm{c}}\bigr)\\
&\stackrel{\mathrm{(b)}}{\le}p_{M_A^{(n)}Z^nK^n}\bigl(\mathcal{B}_n\cap\mathcal{C}_n\cap\mathcal{D}_n\cap\mathcal{E}_n\bigr)+3\,\mathrm{e}^{-n\eta}.
\end{aligned}
\]

Step (a) follows from the definition of ℘. Step (b) follows from Lemma A2. □

Author Contributions

Both the first and the second authors contributed to the writing of the original draft of this paper. Other contributions of the first author include (but are not limited to): the conceptualization of the research goals and aims, the validation of the results, the visualization/presentation of the work, and the review and editing. Other contributions of the second author include (but are not limited to): the conceptualization of the ideas, research goals and aims, the formal analysis, and the supervision.

Funding

This research was funded by Japan Society for the Promotion of Science (JSPS) Kiban (B) 18H01438 and Japan Society for the Promotion of Science (JSPS) Kiban (C) 18K11292.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

