Entropy. 2019 May 7;21(5):478. doi: 10.3390/e21050478

Distributed Hypothesis Testing with Privacy Constraints

Atefeh Gilani 1, Selma Belhadj Amor 2, Sadaf Salehkalaibar 1,*, Vincent Y F Tan 2
PMCID: PMC7514967  PMID: 33267192

Abstract

We revisit the distributed hypothesis testing (or hypothesis testing with communication constraints) problem from the viewpoint of privacy. Instead of observing the raw data directly, the transmitter observes a sanitized or randomized version of it. We impose an upper bound on the mutual information between the raw and randomized data. Under this scenario, the receiver, which is also provided with side information, is required to make a decision on whether the null or alternative hypothesis is in effect. We first provide a general lower bound on the type-II exponent for an arbitrary pair of hypotheses. Next, we show that if the distribution under the alternative hypothesis is the product of the marginals of the distribution under the null (i.e., testing against independence), then the exponent is known exactly. Moreover, we show that the strong converse property holds. Using ideas from Euclidean information theory, we also provide an approximate expression for the exponent when the communication rate is low and the privacy level is high. Finally, we illustrate our results with a binary and a Gaussian example.

Keywords: hypothesis testing, privacy, mutual information, testing against independence, zero-rate communication

1. Introduction

In the distributed hypothesis testing (or hypothesis testing with communication constraints) problem, sensors in a network collect observations from the environment and describe them over the network to a decision center. The goal is to guess the joint distribution governing the observations at the terminals. In particular, there are two possible hypotheses, H = 0 or H = 1, and the joint distribution of the observations is specified under each of them. The performance of this system is characterized by two criteria: the type-I and type-II error probabilities. The probability of deciding on H = 1 (respectively, H = 0) when the true hypothesis is H = 0 (respectively, H = 1) is referred to as the type-I (respectively, type-II) error probability. There are several approaches to defining the performance of a hypothesis test. First, we can maximize the exponent (exponential rate of decay) of the Bayesian error probability. Second, we can impose that the type-I error probability decays exponentially fast at a prescribed rate and then maximize the exponent of the type-II error probability; this is known as the Hoeffding regime. The approach in this work is the Chernoff-Stein regime, in which we upper bound the type-I error probability by a non-vanishing constant and maximize the exponent of the type-II error probability.

A special case of interest is testing against independence, where the joint distribution under H = 1 is the product of the marginals under H = 0. The optimal type-II error exponent for testing against independence was determined by Ahlswede and Csiszár in [1]. Several extensions of this basic problem have been studied: a multi-observer setup [2,3,4,5,6], a multi-decision-center setup [7,8] and a setup with security constraints [9]. The main idea of the achievable scheme in these works is typicality testing [10,11]. The sensor finds a codeword jointly typical with its observation and sends the corresponding bin index to the decision center. The final decision is declared based on a typicality check of the received codeword against the observation at the center. We note that the coding scheme employed here is reminiscent of those used for source coding with side information [12] and for different variants of the information bottleneck problem [13,14,15,16].

1.1. Injecting Privacy Considerations into Our System

We revisit the distributed hypothesis testing problem from a privacy perspective. In many applications such as healthcare systems, there is a need to randomize the data before publishing it. For example, hospitals often hold large amounts of medical records of their patients. These records are useful for performing various statistical inference tasks, such as learning about the causes of a certain ailment. However, due to privacy considerations of the patients, the data cannot be published as is. The data needs to be sanitized, quantized or perturbed before being fed to a management center, where statistical inference, such as hypothesis testing, is performed.

In the proposed setup, we use a privacy mechanism to sanitize the observation at the terminal before it is compressed; see Figure 1. The compression is performed at a separate terminal called transmitter, which communicates the randomized data over a noiseless link of rate R to a receiver. The hypothesis testing is performed using the received data (the compression index and additional side information) to determine the correct hypothesis governing the original observations. The privacy criterion is defined by the mutual information [17,18,19,20] of the published and original data.

Figure 1.

Hypothesis testing with communication and privacy constraints.

There is a long history of research on appropriate metrics to measure privacy. To quantify the information leakage that an observation X̂ can induce about a latent variable X, Shannon's mutual information I(X;X̂) is considered in [17,18,19,20]. Smith [18] proposed to use Arimoto's mutual information of order α, I_α(X;X̂). Barthe and Köpf [21,22,23] proposed the maximal leakage max_{P_X} I_∞(X;X̂). We refer the reader to [24] for a survey of existing information leakage measures. A different line of work, in statistics, computer science, and other related fields, concerns differential privacy, initially proposed in [25]. Furthermore, a generalized notion, (ϵ,δ)-differential privacy [26], provides a unified mathematical framework for data privacy. The reader is referred to the survey by Dwork [27], the statistical framework studied by Wasserman and Zhou [28], and the references therein.

The privacy mechanism can be either memoryless or non-memoryless. In the former, the distribution of the randomized data at each time instant depends only on the original data at that same time instant and not on the previous history of the data.

1.2. Description of Our System Model

We propose a coding scheme for the proposed setup. The idea is that the sensor, upon observing the source sequence, performs a typicality test and obtains its belief of the hypothesis. If the belief is H = 0, it publishes the randomized data based on a specific memoryless mechanism. However, if its belief is H = 1, it sends an all-zero sequence to let the transmitter know about its decision. The transmitter communicates the received data, which is either a sanitized version of the original data or an all-zero sequence, over the noiseless link to the receiver. In this scheme, the overall privacy mechanism is non-memoryless, since the typicality check of the source sequence, which uses the entire history of the observation, determines the published data. It is shown that the achievable error exponent recovers previous results on hypothesis testing with zero and positive communication rates in [10]. Our work is related to a recent work [29] where a general hypothesis testing setup is considered from a privacy perspective. However, the problem considered in [29] is different from ours: the authors consider equivocation and average distortion as possible measures of privacy, whereas we constrain the mutual information between the original and released (published) data.

We highlight a difference between the proposed scheme and some previous works. The privacy mechanism, even if it is memoryless, cannot be viewed as a noiseless link of a rate equivalent to the privacy criterion. In particular, the proposed model differs from the cascade hypothesis testing problem of [8] and similar works [3,4], which consider consecutive noiseless links for data compression and distributed hypothesis testing. The difference comes from the fact that in those works, a codeword is chosen jointly typical with the observed sequence at the terminal and its corresponding index is sent over the noiseless link. In our model, however, the randomized sequence is not necessarily jointly typical with the original sequence. Thus, there is a need for an achievable scheme which lets the transmitter know whether the original data is typical or not.

The problem of hypothesis testing against independence with a memoryless privacy mechanism is also considered. A coding scheme is proposed where the sensor outputs the randomized data based on the memoryless privacy mechanism. The optimality of the achievable type-II error exponent is shown by providing a strong converse. Specializing the optimal error exponent to a binary example shows that an increase in the privacy criterion (a less stringent privacy mechanism) results in a larger type-II error exponent. Thus, there exists a trade-off between the privacy and hypothesis testing criteria. The optimal type-II error exponent is further studied for the case of a restricted privacy mechanism and zero-rate communication. The Euclidean approach of [30,31,32,33] is used to approximate the error exponent in this regime. The result confirms the trade-off between the privacy criterion and the type-II error exponent. Finally, a Gaussian setup is proposed and its optimal error exponent is established.

1.3. Main Contributions

The contributions of the paper are listed in the following:

  • An achievable type-II error exponent is proposed using a non-memoryless privacy mechanism (Theorem 1 in Section 3);

  • The optimal error exponent of testing against independence with a memoryless privacy mechanism is determined. In addition, a strong converse is also proved (Theorem 2 in Section 4.1);

  • A binary example is proposed to show the trade-off between the privacy and error exponent (Section 4.3);

  • A Euclidean approximation [30] of the error exponent is provided (Section 4.4);

  • A Gaussian setup is proposed and its optimal error exponent is derived (Proposition 2 in Section 4.5).

1.4. Notation

The notation mostly follows [34]. Random variables are denoted by capital letters, e.g., X, Y, and their realizations by lower-case letters, e.g., x, y. The alphabet of the random variable X is denoted as X. Sequences of random variables and their realizations are denoted by (X_i, …, X_j) and (x_i, …, x_j) and are abbreviated as X_i^j and x_i^j. We use the alternative notation X^j when i = 1. Vectors and matrices are denoted by boldface letters, e.g., k, W. The 2-norm of k is denoted as ‖k‖. The notation k^T denotes the transpose of k.

The probability mass function (pmf) of a discrete random variable X is denoted as P_X, and the conditional pmf of X given Y is denoted as P_{X|Y}. The notation D(P_X ‖ Q_X) denotes the Kullback-Leibler (KL) divergence between two pmfs P_X and Q_X. The total variation distance between two pmfs P_X and Q_X is denoted by |P_X − Q_X| = (1/2) ∑_x |P_X(x) − Q_X(x)|. We use tp(x^n, y^n) to denote the joint type of (x^n, y^n).
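The divergence and distance just defined can be computed directly; the following is a minimal Python sketch (the function names are ours, not from the paper):

```python
import math

def kl_divergence(P, Q):
    """D(P || Q) in bits; assumes Q(x) > 0 wherever P(x) > 0."""
    return sum(p * math.log2(p / q) for p, q in zip(P, Q) if p > 0)

def total_variation(P, Q):
    """|P - Q| = (1/2) * sum_x |P(x) - Q(x)|."""
    return 0.5 * sum(abs(p - q) for p, q in zip(P, Q))
```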

For a given P_XY and a positive number μ, we denote by T_μ^n(P_XY) the set of jointly μ-typical sequences [34], i.e., the set of all (x^n, y^n) whose joint type is within μ of P_XY (in the sense of total variation distance). The notation T^n(P_X) denotes the type class of the type P_X.

The notation h_b(·) denotes the binary entropy function, h_b^{−1}(·) its inverse over [0, 1/2], and a ⋆ b ≜ a(1−b) + (1−a)b for 0 ≤ a, b ≤ 1. The differential entropy of a continuous random variable X is h(X). All logarithms log(·) are taken with respect to base 2.
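The binary entropy function, its inverse over [0, 1/2], and the binary convolution a ⋆ b appear repeatedly in the binary example of Section 4.3; a small Python sketch (names ours), with the inverse computed by bisection:

```python
import math

def hb(p):
    """Binary entropy h_b(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def hb_inv(v):
    """Inverse of h_b over [0, 1/2]; h_b is increasing there, so bisection works."""
    lo, hi = 0.0, 0.5
    for _ in range(200):
        mid = (lo + hi) / 2
        if hb(mid) < v:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def star(a, b):
    """Binary convolution a * b = a(1 - b) + (1 - a)b."""
    return a * (1 - b) + (1 - a) * b
```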

1.5. Organization

The remainder of the paper is organized as follows. Section 2 describes a mathematical setup for our proposed problem. Section 3 discusses hypothesis testing with general distributions. The results for hypothesis testing against independence with a memoryless privacy mechanism are provided in Section 4. The paper is concluded in Section 5.

2. System Model

Let X, Y, and X̂ be arbitrary finite alphabets and let n be a positive integer. Consider the hypothesis testing problem with communication and privacy constraints depicted in Figure 1. The first terminal in the system, the Randomizer, receives the sequence X^n = (X_1, …, X_n) ∈ X^n and outputs the sequence X̂^n = (X̂_1, …, X̂_n) ∈ X̂^n, which is a noisy version of X^n under a privacy mechanism determined by the conditional probability distribution P_{X̂^n|X^n}; the second terminal, the Transmitter, receives the sequence X̂^n; the third terminal, the Receiver, observes the side-information sequence Y^n = (Y_1, …, Y_n) ∈ Y^n. Under the null hypothesis,

H = 0 : (X^n, Y^n) ∼ i.i.d. P_{XY}, (1)

whereas under the alternative hypothesis

H = 1 : (X^n, Y^n) ∼ i.i.d. Q_{XY}, (2)

for two given pmfs PXY and QXY.

The privacy mechanism is described by the conditional pmf P_{X̂^n|X^n}, which maps each sequence X^n ∈ X^n to a sequence X̂^n ∈ X̂^n. For any (x̂^n, x^n, y^n) ∈ X̂^n × X^n × Y^n, the joint distributions under the privacy mechanism are given by

P^n_{X̂XY}(x̂^n, x^n, y^n) ≜ P_{X̂^n|X^n}(x̂^n|x^n) · ∏_{i=1}^n P_{XY}(x_i, y_i), (3)
Q^n_{X̂XY}(x̂^n, x^n, y^n) ≜ P_{X̂^n|X^n}(x̂^n|x^n) · ∏_{i=1}^n Q_{XY}(x_i, y_i). (4)

A memoryless/local privacy mechanism is defined by a conditional pmf P_{X̂|X} which stochastically and independently maps each entry X_i ∈ X of X^n to a released X̂_i ∈ X̂ to construct X̂^n. Consequently, for the memoryless privacy mechanism, the conditional pmf P_{X̂^n|X^n}(x̂^n|x^n) factorizes as follows:

P_{X̂^n|X^n}(x̂^n|x^n) = ∏_{i=1}^n P_{X̂|X}(x̂_i|x_i) = P^n_{X̂|X}(x̂^n|x^n),  ∀(x̂^n, x^n) ∈ X̂^n × X^n. (5)
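A memoryless mechanism as in (5) simply passes each symbol through the channel P_{X̂|X} independently; a small Python sketch (the helper name and the row-stochastic-list representation are our own choices):

```python
import random

def memoryless_randomizer(x_seq, P_cond, rng):
    """Map each x_i independently to an output drawn from P_cond[x_i],
    the row of the channel matrix P_{X-hat|X} indexed by x_i."""
    out = []
    for x in x_seq:
        r, acc = rng.random(), 0.0
        for xhat, p in enumerate(P_cond[x]):
            acc += p
            if r < acc:
                out.append(xhat)
                break
        else:  # guard against rounding of the row sum
            out.append(len(P_cond[x]) - 1)
    return out
```

For example, the identity channel reproduces the input, and the channel [[0, 1], [1, 0]] deterministically flips each bit.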

There is a noise-free bit pipe of rate R from the transmitter to the receiver. Upon observing X̂^n, the transmitter computes the message M = ϕ^(n)(X̂^n) using a possibly stochastic encoding function ϕ^(n): X̂^n → {0, …, 2^{nR}} and sends it over the bit pipe to the receiver.

The goal of the receiver is to produce a guess of H using a decoding function g^(n): Y^n × {0, …, 2^{nR}} → {0, 1} based on the observation Y^n and the received message M. Thus, the estimate of the hypothesis is Ĥ = g^(n)(Y^n, M).

This induces a partition of the sample space X̂^n × X^n × Y^n into an acceptance region A_n defined as follows:

A_n ≜ {(x̂^n, x^n, y^n) : g^(n)(y^n, ϕ^(n)(x̂^n)) = 0}, (6)

and a rejection region denoted by A_n^c.

Definition 1.

For any ϵ ∈ [0, 1) and for a given rate-privacy pair (R, L) ∈ ℝ₊², we say that a type-II exponent θ ∈ ℝ₊ is (ϵ, R, L)-achievable if there exists a sequence of functions and conditional pmfs (ϕ^(n), g^(n), P_{X̂^n|X^n}), such that the corresponding sequences of type-I and type-II error probabilities at the receiver are defined as

α_n ≜ P^n_{X̂XY}(A_n^c)  and  β_n ≜ Q^n_{X̂XY}(A_n), (7)

respectively, and they satisfy

lim sup_{n→∞} α_n ≤ ϵ  and  lim inf_{n→∞} (1/n) log(1/β_n) ≥ θ. (8)

Furthermore, the privacy measure

T_n ≜ (1/n) I(X^n; X̂^n), (9)

satisfies

lim sup_{n→∞} T_n ≤ L. (10)

The optimal exponent θ_ϵ*(R, L) is the supremum of all (ϵ, R, L)-achievable θ ∈ ℝ₊.
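For a memoryless mechanism satisfying (5) and an i.i.d. source, the privacy measure (9) single-letterizes to T_n = I(X; X̂), which can be evaluated directly from P_X and the channel P_{X̂|X}; a small Python sketch (names ours):

```python
import math

def mutual_information(P_X, P_cond):
    """I(X; X-hat) in bits for input pmf P_X (a list) and row-stochastic
    channel P_cond, where P_cond[x][o] = P_{X-hat|X}(o|x)."""
    n_out = len(P_cond[0])
    # marginal pmf of the released variable
    P_out = [sum(P_X[x] * P_cond[x][o] for x in range(len(P_X))) for o in range(n_out)]
    mi = 0.0
    for x, px in enumerate(P_X):
        for o, pc in enumerate(P_cond[x]):
            if px > 0 and pc > 0:
                mi += px * pc * math.log2(pc / P_out[o])
    return mi
```

For a uniform binary source through a BSC(p), this returns 1 − h_b(p), consistent with the binary example of Section 4.3.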

3. General Hypothesis Testing

3.1. Achievable Error Exponent

The following presents an achievable error exponent for the proposed setup.

Theorem 1.

For a given ϵ ∈ [0, 1) and a rate-privacy pair (R, L) ∈ ℝ₊², the optimal type-II error exponent θ_ϵ*(R, L) for the multiterminal hypothesis testing setup under the privacy constraint L and the rate constraint R satisfies

θ_ϵ*(R, L) ≥ max_{P_{U|X̂}, P_{X̂|X} : R ≥ I(U;X̂), L ≥ I(X;X̂)}  min_{P̃_{UX̂XY} ∈ 𝒫_{UX̂XY}} D(P̃_{UX̂XY} ‖ P_{U|X̂} P_{X̂|X} Q_{XY}), (11)

where the set PUX^XY is defined as

𝒫_{UX̂XY} ≜ { P̃_{UX̂XY} : P̃_X = P_X, P̃_{UY} = P_{UY}, P̃_{UX̂} = P_{UX̂} }. (12)

Given PU|X^ and PX^|X, the mutual informations in (11) are calculated according to the following joint distribution:

P_{UX̂|X} ≜ P_{U|X̂} · P_{X̂|X}. (13)

Proof. 

The coding scheme is given in the following section. For the analysis, see Appendix A. □

3.2. Coding Scheme

In this section, we propose a coding scheme for Theorem 1 under fixed rate and privacy constraints (R, L) ∈ ℝ₊². Fix the joint distribution P_{UX̂XY} according to (13). Let P_U(u) be the marginal distribution of U ∈ U defined as

P_U(u) ≜ ∑_{x̂∈X̂} P_{U|X̂}(u|x̂) ∑_{x∈X} P_{X̂X}(x̂, x). (14)

Fix positive μ > 0 and ζ > 0, an arbitrary blocklength n, and two conditional pmfs P_{X̂|X} and P_{U|X̂} over finite auxiliary alphabets X̂ and U. Fix also the rate and privacy leakage level as

R = I(U; X̂) + μ,  and  L = I(X̂; X) + ζ. (15)

Codebook Generation: Randomly and independently generate a codebook

C_U ≜ { U^n(m) : m ∈ {0, …, 2^{nR}} }, (16)

by drawing Un(m) in an i.i.d. manner according to PU. The codebook is revealed to all terminals.

Randomizer: Upon observing x^n, it checks whether x^n ∈ T_{μ/4}^n(P_X). If successful, it outputs the sequence x̂^n whose i-th component x̂_i is generated based on x_i according to P_{X̂|X}(x̂_i|x_i). If the typicality check is not successful, the randomizer outputs x̂^n = 0^n, the all-zero sequence of length n.

Transmitter: Upon observing x̂^n, if x̂^n ≠ 0^n, the transmitter finds an index m such that (u^n(m), x̂^n) ∈ T_{μ/2}^n(P_{UX̂}). If successful, it sends the index m over the noiseless link to the receiver. Otherwise, if the typicality check is not successful or x̂^n = 0^n, it sends m = 0.

Receiver: Upon observing y^n and receiving the index m, if m = 0, the receiver declares Ĥ = 1. If m ≠ 0, it checks whether (u^n(m), y^n) ∈ T_μ^n(P_{UY}). If the test is successful, the receiver declares Ĥ = 0; otherwise, it sets Ĥ = 1.
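Each typicality check above amounts to comparing the empirical joint type of the sequences with the target pmf in total variation (Section 1.4); a small illustrative Python sketch under our own dictionary-based representation:

```python
from collections import Counter

def joint_type(xn, yn):
    """Empirical joint type tp(x^n, y^n) as a dict {(x, y): frequency}."""
    n = len(xn)
    return {pair: c / n for pair, c in Counter(zip(xn, yn)).items()}

def is_jointly_typical(xn, yn, P_XY, mu):
    """(x^n, y^n) is in T_mu^n(P_XY) iff the total variation distance between
    the joint type and P_XY (a dict {(x, y): prob}) is at most mu."""
    t = joint_type(xn, yn)
    support = set(t) | set(P_XY)
    tv = 0.5 * sum(abs(t.get(s, 0.0) - P_XY.get(s, 0.0)) for s in support)
    return tv <= mu
```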

Remark 1.

In the above scheme, the sequence X̂^n is chosen to be an n-length zero sequence when the randomizer finds that X^n is not typical according to P_X. Thus, the privacy mechanism is not memoryless and the sequence X̂^n is not independently and identically distributed (i.i.d.). A detailed analysis in Appendix A shows that the privacy criterion is not larger than L as the blocklength n → ∞.

3.3. Discussion

In the following, we discuss some special cases. First, suppose that R = 0. The following corollary shows that Theorem 1 recovers Han's result [10] for distributed hypothesis testing with zero-rate communication.

Corollary 1

(Theorem 5 in [10]). Suppose that Q_XY > 0. For all ϵ ∈ [0, 1), the optimal error exponent of zero-rate communication for any privacy mechanism (including non-memoryless mechanisms) is given by the following:

θ_ϵ*(0, L) = min_{P̃_XY : P̃_X = P_X, P̃_Y = P_Y} D(P̃_XY ‖ Q_XY). (17)

Proof. 

The proof of achievability follows from Theorem 1, in which X̂ is arbitrary and the auxiliary U is set to a constant due to the zero-rate constraint. The proof of the strong converse follows along the same lines as [35]. □

Remark 2.

Consider the case of R > 0 and L = 0, where X̂ is independent of X. Using Theorem 1, the optimal error exponent is lower bounded as follows:

θ_ϵ*(R, 0) ≥ min_{P̃_XY : P̃_X = P_X, P̃_Y = P_Y} D(P̃_XY ‖ Q_XY). (18)

However, there is no known converse result in this case where the communication rate is positive. Comparing this special case with the one in Corollary 1 shows that the proposed model does not, in general, admit symmetry between the rate and privacy constraints. However, we will see from some specific examples in the following that the roles of R and L are symmetric.

Now, suppose that L is large enough that L > H(X). The following corollary shows that Theorem 1 recovers Han's result in [10] for distributed hypothesis testing over a rate-R communication link.

Corollary 2

(Theorem 2 in [10]). Assuming L > H(X), the optimal error exponent is lower bounded as follows:

θ_ϵ*(R, L) ≥ max_{P_{U|X} : R ≥ I(U;X)}  min_{P̃_UXY : P̃_UX = P_UX, P̃_UY = P_UY} D(P̃_UXY ‖ P_{U|X} Q_XY). (19)

Proof. 

The proof follows from Theorem 1 by specializing to X̂ = X. □

The above two special cases reveal a trade-off between the privacy criterion and the achievable error exponent when the communication rate is positive, i.e., R>0. An increase in L results in a larger achievable error exponent. This observation is further illustrated by an example in Section 4.3 to follow.

4. Hypothesis Testing against Independence with a Memoryless Privacy Mechanism

In this section, we consider testing against independence where the joint pmf under H=1 factorizes as follows:

Q_XY = P_X · P_Y. (20)

The privacy mechanism is assumed to be memoryless here.

4.1. Optimal Error Exponent

The following theorem, which includes a strong converse, states the optimal error exponent for this special case.

Theorem 2.

For any (R, L) ∈ ℝ₊², define

θ_ϵ*(R, L) = max_{P_{U|X̂}, P_{X̂|X} : R ≥ I(U;X̂), L ≥ I(X;X̂)} I(U; Y). (21)

Then, for any ϵ ∈ [0, 1) and any (R, L) ∈ ℝ₊², the optimal error exponent for testing against independence when using a memoryless privacy mechanism is given by (21), where it suffices to choose |U| ≤ |X̂| + 1 and |X̂| ≤ |X| according to Carathéodory's theorem [36] (Theorem 15.3.5).

Proof. 

The coding scheme is given in the following section. For the rest of the proof, see Appendix B. □

4.2. Coding Scheme

In this section, we propose a coding scheme for Theorem 2. Fix the joint distribution as in (13), and the rate and privacy constraints as in (15). Generate the codebook CU as in (16).

Randomizer: Upon observing x^n, it outputs the sequence x̂^n in which the i-th component x̂_i is generated based on x_i according to P_{X̂|X}(x̂_i|x_i).

Transmitter: It finds an index m such that (u^n(m), x̂^n) ∈ T_{μ/2}^n(P_{UX̂}). If successful, it sends the index m over the noiseless link to the receiver. Otherwise, it sends m = 0.

Receiver: Upon observing y^n and receiving the index m, if m = 0, the receiver declares Ĥ = 1. If m ≠ 0, it checks whether (u^n(m), y^n) ∈ T_μ^n(P_{UY}). If the test is successful, the receiver declares Ĥ = 0; otherwise, it sets Ĥ = 1.

Remark 3.

In the above scheme, the sequence X̂^n is i.i.d. since it is generated by the memoryless mechanism P_{X̂|X}.

When the communication rate is positive, there exists a trade-off between the optimal error exponent and the privacy criterion. The following example elucidates this trade-off.

4.3. Binary Example

In this section, we study hypothesis testing against independence for a binary example. Suppose that under both hypotheses, we have X ∼ Bern(1/2). Under the null hypothesis,

H = 0 : Y = X ⊕ N,  N ∼ Bern(q), (22)

for some 0 ≤ q ≤ 1, where N is independent of X. Under the alternative hypothesis,

H = 1 : Y ∼ Bern(1/2), (23)

where Y is independent of X. The cardinality constraint shows that it suffices to choose |X̂| = 2. Among all possible privacy mechanisms, the symmetric choice P_{X̂|X}(1|0) = P_{X̂|X}(0|1) and P_{X̂|X}(0|0) = P_{X̂|X}(1|1), i.e., a binary symmetric channel (BSC), minimizes the mutual information I(X; X̂). Thus, we restrict attention to this choice, which also results in X̂ ∼ Bern(1/2).

The cardinality bound on the auxiliary random variable U is |U| ≤ 3. The following proposition states that it is also optimal to choose P_{U|X̂} to be a BSC.

Proposition 1.

The optimal error exponent of the proposed binary setup is given by the following:

θ_ϵ*(R, L) = 1 − h_b( q ⋆ h_b^{−1}(1 − L) ⋆ h_b^{−1}(1 − R) ). (24)

Proof. 

For the proof of achievability, choose the following auxiliary random variables:

X̂ = X ⊕ Ẑ,  Ẑ ∼ Bern(p₁), (25)
U = X̂ ⊕ Z,  Z ∼ Bern(p₂), (26)

for some 0 ≤ p₁, p₂ ≤ 1/2, where Ẑ and Z are independent of X and (X, X̂), respectively. The optimal error exponent of Theorem 2 reduces to the following:

θ_ϵ*(R, L) = max_{0 ≤ p₁, p₂ ≤ 1/2 : R ≥ 1 − h_b(p₂), L ≥ 1 − h_b(p₁)}  1 − h_b(q ⋆ p₁ ⋆ p₂), (27)

which can be simplified to (24). For the proof of the converse, see Appendix C. □

Figure 2 illustrates the error exponent versus the privacy parameter L for a fixed rate R. There is clearly a trade-off between θϵ*(R,L) and L. For a less stringent privacy requirement (large L), the error exponent θϵ*(R,L) increases.
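The closed form (24) is easy to evaluate numerically; the sketch below (helper names ours) also exhibits the monotonicity in L discussed above, as well as the symmetry of (24) in R and L:

```python
import math

def hb(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def hb_inv(v):
    """Inverse of h_b over [0, 1/2], by bisection."""
    lo, hi = 0.0, 0.5
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if hb(mid) < v else (lo, mid)
    return (lo + hi) / 2

def star(a, b):
    """Binary convolution a(1 - b) + (1 - a)b."""
    return a * (1 - b) + (1 - a) * b

def theta_binary(R, L, q):
    """Proposition 1: 1 - h_b(q * hb_inv(1 - L) * hb_inv(1 - R))."""
    return 1.0 - hb(star(q, star(hb_inv(1 - L), hb_inv(1 - R))))
```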

Figure 2.

θϵ*(R,L) versus L for q=0.1 and various values of R.

4.4. Euclidean Approximation

In this section, we propose Euclidean approximations [30,31] for the optimal error exponent of the testing-against-independence scenario (Theorem 2) when R → 0 and L → 0. Consider the optimal error exponent as follows:

θ_ϵ*(R, L) = max_{P_{U|X̂}, P_{X̂|X} : R ≥ I(U;X̂), L ≥ I(X;X̂)} I(U; Y). (28)

Let W, of dimension |Y| × |X|, denote the transition matrix P_{Y|X}, which is induced by P_X and the joint distribution P_XY. Now, consider the rate constraint as follows:

I(U; X̂) = ∑_{u∈U} P_U(u) D( P_{X̂|U}(·|u) ‖ P_X̂ ) ≤ R. (29)

Assuming R → 0, we let P_{X̂|U}(·|u) be a local perturbation of P_X̂(·), where we have

P_{X̂|U}(·|u) = P_X̂(·) + ψ_u(·), (30)

for a perturbation ψu(·) satisfying

∑_{x̂∈X̂} ψ_u(x̂) = 0, (31)

in order to preserve the row stochasticity of P_{X̂|U}. Using a χ²-approximation [30], we can write:

D( P_{X̂|U}(·|u) ‖ P_X̂ ) ≈ (1/2) · log e · ‖v_u‖², (32)

where v_u denotes the length-|X̂| column vector of weighted perturbations whose x̂-th component is defined as:

v_u(x̂) ≜ ψ_u(x̂) / √(P_X̂(x̂)),  x̂ ∈ X̂. (33)

Using the above definition, the rate constraint in (29) can be written as:

∑_{u∈U} P_U(u) ‖v_u‖² ≤ 2R / log e. (34)
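The χ²-approximation in (32) can be checked numerically: for a zero-sum perturbation ψ of a pmf P, the divergence D(P + ψ ‖ P) in bits is close to (1/2) · log e · ‖v‖² with v(x) = ψ(x)/√P(x). A small sketch with made-up numbers:

```python
import math

P = [0.5, 0.3, 0.2]
eps = 1e-3
psi = [eps, -0.5 * eps, -0.5 * eps]  # sums to zero, so P + psi is still a pmf

Q = [p + s for p, s in zip(P, psi)]
kl_bits = sum(q * math.log2(q / p) for q, p in zip(Q, P))

# ||v||^2 with v(x) = psi(x) / sqrt(P(x))
v_sq = sum(s * s / p for s, p in zip(psi, P))
approx = 0.5 * math.log2(math.e) * v_sq
```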

Similarly, consider the privacy constraint as the following:

I(X; X̂) = ∑_{x̂∈X̂} P_X̂(x̂) D( P_{X|X̂}(·|x̂) ‖ P_X ) ≤ L. (35)

Assuming L → 0, we let P_{X|X̂}(·|x̂) be a local perturbation of P_X(·), where

P_{X|X̂}(·|x̂) = P_X(·) + ϕ_x̂(·), (36)

for a perturbation ϕ_x̂(·) that satisfies:

∑_{x∈X} ϕ_x̂(x) = 0. (37)

Again, using a χ2-approximation, we obtain the following:

D( P_{X|X̂}(·|x̂) ‖ P_X ) ≈ (1/2) · log e · ‖v_x̂‖², (38)

where v_x̂ is a length-|X| column vector whose x-th component is defined as follows:

v_x̂(x) ≜ ϕ_x̂(x) / √(P_X(x)),  x ∈ X. (39)

Thus, the privacy constraint in (35) can be written as:

∑_{x̂∈X̂} P_X̂(x̂) ‖v_x̂‖² ≤ 2L / log e. (40)

For any x ∈ X and u ∈ U, we define the following:

Λ_u(x) ≜ ∑_{x̂∈X̂} ψ_u(x̂) ϕ_x̂(x) (41)
= √(P_X(x)) ∑_{x̂∈X̂} √(P_X̂(x̂)) v_u(x̂) v_x̂(x), (42)

and the corresponding length-|X| column vector Λu defined as follows:

Λ_u = √P_X V_X̂ √P_X̂ v_u, (43)

where √P_X denotes the diagonal |X| × |X| matrix whose (x, x)-th element (x ∈ X) is √(P_X(x)), and √P_X̂ is defined similarly. Moreover, V_X̂ refers to the |X| × |X̂| matrix defined as follows:

V_X̂ ≜ [ v_1  v_2  ⋯  v_x̂  ⋯  v_|X̂| ]. (44)

Let √P_Y^{−1} be the inverse of the diagonal |Y| × |Y| matrix √P_Y. As shown in Appendix D, the optimization problem in (28) can be written as follows:

max_{{v_u}_{u∈U}, V_X̂}  (1/2) · log e · ∑_{u∈U} P_U(u) · ‖ √P_Y^{−1} W √P_X V_X̂ √P_X̂ v_u ‖² (45)
subject to: ∑_{u∈U} P_U(u) ‖v_u‖² ≤ 2R / log e, (46)
∑_{x̂∈X̂} P_X̂(x̂) ‖v_x̂‖² ≤ 2L / log e, (47)
where the maximization is restricted to perturbation vectors {v_u} and columns of V_X̂ for which the perturbed conditionals P_{X̂|U}(·|u) in (30) and P_{X|X̂}(·|x̂) in (36) remain valid pmfs.

The following example specializes the above approximation to the binary case.

Example 1.

Consider the binary setup of Section 4.3 and the choice of auxiliary random variables in (25) and (26). Since the privacy mechanism is assumed to be a BSC, we have

P_X = [1/2  1/2]^T,  P_X̂ = [1/2  1/2]^T. (48)

Now, we consider the vectors v_{u=0} and v_{u=1} defined as

v_{u=0} = [√2 ξ₁  −√2 ξ₁]^T, (49)
v_{u=1} = [−√2 ξ₁  √2 ξ₁]^T, (50)

for some positive ξ1. This yields the following:

P_{X̂|U=0} = P_X̂ + [ξ₁  −ξ₁]^T, (51)
P_{X̂|U=1} = P_X̂ + [−ξ₁  ξ₁]^T. (52)

We also choose the vectors v_{x̂=0} and v_{x̂=1} as follows:

v_{x̂=0} = [√2 ξ₂  −√2 ξ₂]^T, (53)
v_{x̂=1} = [−√2 ξ₂  √2 ξ₂]^T, (54)

which results in

P_{X|X̂=0} = P_X + [ξ₂  −ξ₂]^T, (55)
P_{X|X̂=1} = P_X + [−ξ₂  ξ₂]^T. (56)

Notice that the matrix W is given by

W = [1−q  q; q  1−q]. (57)

Thus, the optimization problem in (45)-(47) reduces to the following:

max_{−1/2 ≤ ξ₁, ξ₂ ≤ 1/2}  8 · log e · (1−2q)² |ξ₁|² |ξ₂|² (58)
subject to: 4|ξ₁|² ≤ 2R / log e  and  4|ξ₂|² ≤ 2L / log e. (59)

Solving the above optimization yields

θ_ϵ*(R → 0, L → 0) ≈ (2 / log e) · (1−2q)² · R · L. (60)

For some values of the parameters, the approximation in (60) is compared to the exact error exponent of (24) in Figure 3. We observe that when R = L → 0, the approximation turns out to be excellent.
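This comparison is easy to reproduce; a Python sketch (function names ours) evaluating the exact exponent (24) against the approximation (60) for small R = L:

```python
import math

def hb(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def hb_inv(v):
    """Inverse of h_b over [0, 1/2], by bisection."""
    lo, hi = 0.0, 0.5
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if hb(mid) < v else (lo, mid)
    return (lo + hi) / 2

def star(a, b):
    """Binary convolution a(1 - b) + (1 - a)b."""
    return a * (1 - b) + (1 - a) * b

def theta_exact(R, L, q):
    """Exact exponent (24)."""
    return 1.0 - hb(star(q, star(hb_inv(1 - L), hb_inv(1 - R))))

def theta_approx(R, L, q):
    """Approximation (60): (2 / log e)(1 - 2q)^2 R L = 2 ln(2) (1 - 2q)^2 R L."""
    return 2 * math.log(2) * (1 - 2 * q) ** 2 * R * L
```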

Figure 3.

θϵ*(R0,L0) versus L for q=0.1 and R=L.

Remark 4.

The trade-off between the optimal error exponent and the privacy can again be verified from (60) in the case of L → 0 and R → 0. As L becomes larger (which corresponds to a less stringent privacy requirement), the error exponent also increases. For a fixed error exponent, a trade-off between R and L exists: an increase in R permits a decrease in L.

4.5. Gaussian Setup

In this section, we consider hypothesis testing against independence in a Gaussian example. Suppose that X ∼ N(0, 1) and that, under the null hypothesis H = 0, the sources X and Y are jointly Gaussian, distributed as N(0, G_XY), where G_XY is defined as follows:

G_XY ≜ [1  ρ; ρ  1], (61)

for some 0 ≤ ρ ≤ 1.

Under the alternative hypothesis H=1, we assume that X and Y are independent Gaussian random variables, each distributed as N(0,1). Consider the privacy constraint as follows:

L ≥ I(X; X̂) = h(X) − h(X|X̂). (62)

For a Gaussian source X, the conditional entropy h(X|X̂) is maximized when (X, X̂) is jointly Gaussian. This choice minimizes the RHS of (62). Thus, without loss of optimality, we choose

X = X̂ + Z,  Z ∼ N(0, 2^{−2L}), (63)

where Z is independent of X̂. The following proposition states that it is optimal to choose U jointly Gaussian with (X, X̂, Y).

Proposition 2.

The optimal error exponent of the proposed Gaussian setup is given by

θ_ϵ*(R, L) = (1/2) log ( 1 / ( 1 − ρ² · (1 − 2^{−2R}) · (1 − 2^{−2L}) ) ). (64)

Proof. 

For the proof of achievability, we choose X̂ as in (63). Also, let

X̂ = U + Ẑ,  Ẑ ∼ N(0, β²), (65)

for some β² ≥ 0, where Ẑ is independent of U. It can be shown that Theorem 2 remains valid when extended to continuous alphabets [5]. For the details of the simplification and also the proof of the converse, see Appendix E. □

Remark 5.

If L = ∞, the above proposition recovers the optimal error exponent of Rahman and Wagner [5] (Corollary 7) for testing against independence of Gaussian sources over a noiseless link of rate R.
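Proposition 2's closed form is straightforward to evaluate; a Python sketch (function name ours) which also recovers the Rahman-Wagner limit numerically as L grows:

```python
import math

def theta_gaussian(R, L, rho):
    """Proposition 2: (1/2) log2( 1 / (1 - rho^2 (1 - 2^{-2R})(1 - 2^{-2L})) )."""
    return -0.5 * math.log2(1 - rho ** 2 * (1 - 2 ** (-2 * R)) * (1 - 2 ** (-2 * L)))
```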

5. Summary and Discussion

In this paper, distributed hypothesis testing with privacy constraints is considered. A coding scheme is proposed where the sensor decides on one of the hypotheses and generates the randomized data based on its decision. The transmitter describes the randomized data over a noiseless link to the receiver. The privacy mechanism in this scheme is non-memoryless. The special case of testing against independence with a memoryless privacy mechanism is studied in detail. The optimal type-II error exponent of this case is established, together with a strong converse. A binary example is proposed where the trade-off between the privacy criterion and the error exponent is reported. Euclidean approximations are provided for the case in which the privacy level is high and the communication rate is vanishingly small. The optimal type-II error exponent of a Gaussian setup is also established.

A future line of research is to study the second-order asymptotics of our model. The second-order analysis of distributed hypothesis testing without privacy constraints and with zero-rate communication was studied in [37]. In our proposed model, the trade-off between the privacy and type-II error exponent is observed, i.e., a less stringent privacy requirement yields a larger error exponent. The next step is to see whether the trade-off between privacy and error exponent affects the second-order term.

Another potential line of future research is to consider other privacy metrics instead of mutual information. A possible candidate is the maximal leakage [21,22,23], analyzing its performance in tandem with the distributed hypothesis testing problem.

Acknowledgments

The authors would like to thank Lin Zhou (National University of Singapore) and Daming Cao (Southeast University) for helpful discussions during the preparation of the manuscript.

Appendix A. Proof of Theorem 1

The analysis is based on the scheme described in Section 3.2.

Error Probability Analysis: We analyze the type-I and type-II error probabilities averaged over all random codebooks. By standard arguments [36] (p. 204), it can be shown that there exists at least one codebook that satisfies constraints on the error probabilities.

For the considered μ > 0 and the considered blocklength n, let 𝒫_μ^n be the set of all joint types π_{UX̂XY} of sequences in U^n × X̂^n × X^n × Y^n which satisfy the following constraints:

|π_X − P_X| ≤ μ/4, (A1)
|π_{UX̂} − P_{UX̂}| ≤ μ/2, (A2)
|π_{UY} − P_{UY}| ≤ μ. (A3)

First, we analyze the type-I error probability. For the case of M ≠ 0, we define the following event:

E ≜ { (U^n(M), Y^n) ∉ T_μ^n(P_{UY}) }. (A4)

Thus, the type-I error probability can be upper bounded as follows:

α_n ≤ Pr[ X̂^n = 0^n or M = 0 or E | H = 0 ] (A5)
≤ Pr[ X̂^n = 0^n | H = 0 ] + Pr[ M = 0 | X̂^n ≠ 0^n, H = 0 ] + Pr[ E | M ≠ 0, X̂^n ≠ 0^n, H = 0 ] (A6)
≤ ϵ/3 + Pr[ M = 0 | X̂^n ≠ 0^n, H = 0 ] + Pr[ E | M ≠ 0, X̂^n ≠ 0^n, H = 0 ] (A7)
≤ ϵ/3 + ϵ/3 + Pr[ E | M ≠ 0, X̂^n ≠ 0^n, H = 0 ] (A8)
≤ ϵ/3 + ϵ/3 + ϵ/3 (A9)
= ϵ, (A10)

where (A7) follows from the AEP [36] (Theorem 3.1.1); (A8) follows from the covering lemma [34] (Lemma 3.3) and the rate constraint (15); and (A9) follows from the Markov lemma [34] (Lemma 12.1). In all justifications, n is taken to be sufficiently large.

Next, we analyze the type-II error probability. The acceptance region at the receiver is

A_n^Rx = ∪_m { (x̂^n, x^n, y^n) : x̂^n ≠ 0^n, (u^n(m), x̂^n, x^n, y^n) ∈ T_μ^n(P_{UX̂XY}) }. (A11)

The set A_n^Rx is contained within the following acceptance region Ā_n:

Ā_n = ∪_m { (x̂^n, x^n, y^n) : x̂^n ≠ 0^n, (u^n(m), x̂^n, x^n, y^n) ∈ ∪_{π ∈ 𝒫_μ^n} T^n(π) }. (A12)

Let F_m ≜ { tp(U^n(m), X̂^n, X^n, Y^n) ∈ 𝒫_μ^n }. Therefore, the average of the type-II error probability over all codebooks is upper bounded as follows:

E_C[β_n] ≤ Q^n_{X̂XY}(Ā_n) (A13)
≤ ∑_m Pr[ X̂^n ≠ 0^n, F_m | H = 1 ] (A14)
≤ ∑_m Pr[ F_m | X̂^n ≠ 0^n, H = 1 ] (A15)
≤ 2^{nR} · (n+1)^{|U|·|X̂|·|X|·|Y|} · max_{π_{UX̂XY} ∈ 𝒫_μ^n} 2^{−n D(π_{UX̂XY} ‖ P_U P_{X̂|X} Q_XY)} (A16)
= (n+1)^{|U|·|X̂|·|X|·|Y|} · 2^{−n θ̃_μ}, (A17)

where

$\tilde{\theta}_\mu \triangleq \min_{\pi_{U\hat{X}XY} \in \mathcal{P}_\mu^n} D(\pi_{U\hat{X}XY} \| P_U P_{\hat{X}|X} Q_{XY}) - R$, (A18)

and (A16) follows from the upper bound of Sanov’s theorem [36] (Theorem 11.4.1). Hence,

$\tilde{\theta}_\mu = \min_{\pi_{U\hat{X}XY} \in \mathcal{P}_\mu^n} D(\pi_{U\hat{X}XY} \| P_U P_{\hat{X}|X} Q_{XY}) - R$ (A19)
$= \min_{\pi_{U\hat{X}XY} \in \mathcal{P}_\mu^n} D(\pi_{U\hat{X}XY} \| P_U P_{\hat{X}|X} Q_{XY}) - I(U;\hat{X}) - \mu$ (A20)
$= \min_{\pi_{U\hat{X}XY} \in \mathcal{P}_\mu^n} D(\pi_{U\hat{X}XY} \| P_{U|\hat{X}} P_{\hat{X}|X} Q_{XY}) + \delta(\mu)$, (A21)

where $\delta(\mu) \to 0$ as $\mu \to 0$. Equality (A20) follows from the rate constraint in (15), and (A21) holds because $|\pi_{U\hat{X}} - P_{U\hat{X}}| < \mu/2$.

Privacy Analysis: We first analyze the privacy under H=0. Notice that $\hat{X}^n$ is not necessarily i.i.d. because, according to the scheme in Section 3.2, $\hat{X}^n$ is forced to be the all-zero sequence if the Randomizer decides that $X^n$ is not typical. However, conditioned on the event $X^n \in \mathcal{T}_\mu^n(P_X)$, the sequence $\hat{X}^n$ is i.i.d. according to the conditional pmf $P_{\hat{X}|X}$. The privacy measure $T_n$ satisfies

$nT_n = I(X^n;\hat{X}^n) = H(\hat{X}^n) - H(\hat{X}^n|X^n)$. (A22)

We now provide a lower bound on $H(\hat{X}^n|X^n)$ as follows:

$H(\hat{X}^n|X^n) \ge \sum_{x^n \in \mathcal{T}_\mu^n(P_X)} P_X^n(x^n)\, H(\hat{X}^n|X^n = x^n)$. (A23)

For any $x^n \in \mathcal{T}_\mu^n(P_X)$ and for $\mu' > \mu$, it holds that

$H(\hat{X}^n|X^n = x^n) = -\sum_{\hat{x}^n \in \hat{\mathcal{X}}^n} P_{\hat{X}|X}^n(\hat{x}^n|x^n) \log P_{\hat{X}|X}^n(\hat{x}^n|x^n)$ (A24)
$\ge -\sum_{\hat{x}^n \in \mathcal{T}_{\mu'}^n(P_{\hat{X}|X}(\cdot|x^n))} P_{\hat{X}|X}^n(\hat{x}^n|x^n) \log P_{\hat{X}|X}^n(\hat{x}^n|x^n)$ (A25)
$\ge -\sum_{\hat{x}^n \in \mathcal{T}_{\mu'}^n(P_{\hat{X}|X}(\cdot|x^n))} P_{\hat{X}|X}^n(\hat{x}^n|x^n) \log 2^{-n(1-\mu')H(\hat{X}|X)}$ (A26)
$\ge n(1-\mu')^2 H(\hat{X}|X)$, (A27)

where (A26) is true because for any $\hat{x}^n \in \mathcal{T}_{\mu'}^n(P_{\hat{X}|X}(\cdot|x^n))$, it holds that $P_{\hat{X}|X}^n(\hat{x}^n|x^n) \le 2^{-n(1-\mu')H(\hat{X}|X)}$, and (A27) follows because the conditional typicality lemma [34] (Chapter 2) implies that $P_{\hat{X}|X}^n(\mathcal{T}_{\mu'}^n(P_{\hat{X}|X}(\cdot|x^n))\,|\,x^n) \ge 1-\mu'$ for n sufficiently large.

Combining (A23) and (A27), we obtain

$H(\hat{X}^n|X^n) \ge n(1-\mu')^2 H(\hat{X}|X) \sum_{x^n \in \mathcal{T}_\mu^n(P_X)} P_X^n(x^n)$ (A28)
$\ge n(1-\mu')^2 (1-\mu) H(\hat{X}|X)$, (A29)

where (A29) follows because the AEP [36] (Theorem 3.1.1) implies that $P_X^n(\mathcal{T}_\mu^n(P_X)) \ge 1-\mu$ for n sufficiently large.

Hence, we have

$I(X^n;\hat{X}^n) = H(\hat{X}^n) - H(\hat{X}^n|X^n)$ (A30)
$\le nH(\hat{X}) - H(\hat{X}^n|X^n)$ (A31)
$\le nH(\hat{X}) - n(1-\mu'')H(\hat{X}|X)$ (A32)
$= nI(X;\hat{X}) + n\mu'' H(\hat{X}|X)$ (A33)
$\le nL + n\mu'' \log|\hat{\mathcal{X}}|$ (A34)
$= nL + n\zeta$, (A35)

where $\mu'' \triangleq 1-(1-\mu')^2(1-\mu) \to 0$, and $\zeta \triangleq \mu'' \cdot \log|\hat{\mathcal{X}}|$.

Next, consider the privacy analysis under H=1. Please note that when $P_X = Q_X$, the analysis is similar to that under H=0. Thus, we assume $P_X \neq Q_X$ in the following. From (A22), the privacy measure $T_n$ satisfies:

$nT_n = I(X^n;\hat{X}^n) \le H(\hat{X}^n)$. (A36)

To upper bound $H(\hat{X}^n)$, we calculate the probability $P_{\hat{X}^n}(\hat{x}^n)$ for $\hat{x}^n = 0^n$ as follows:

$P_{\hat{X}^n}(0^n) = \sum_{x^n \in \mathcal{T}_\mu^n(P_X)} P_{\hat{X}|X}^n(0^n|x^n) \cdot Q_X^n(x^n) + \sum_{x^n \notin \mathcal{T}_\mu^n(P_X)} P_{\hat{X}|X}^n(0^n|x^n) \cdot Q_X^n(x^n)$ (A37)
$= \sum_{x^n \notin \mathcal{T}_\mu^n(P_X)} P_{\hat{X}|X}^n(0^n|x^n) \cdot Q_X^n(x^n)$ (A38)
$= \sum_{x^n \notin \mathcal{T}_\mu^n(P_X)} Q_X^n(x^n)$ (A39)
$= 1 - Q_X^n(\mathcal{T}_\mu^n(P_X))$ (A40)
$\ge 1 - 2^{-n(D(P_X \| Q_X) - \delta(\mu))} \triangleq 1 - \gamma_n$, (A41)

where $\gamma_n \to 0$ exponentially fast as $n \to \infty$. Here, (A38) follows because if $x^n \in \mathcal{T}_\mu^n(P_X)$, then $P_{\hat{X}|X}^n(0^n|x^n) = 0$; (A39) follows because when $x^n \notin \mathcal{T}_\mu^n(P_X)$, then $P_{\hat{X}|X}^n(0^n|x^n) = 1$; and (A41) follows from Sanov's theorem and the continuity of the relative entropy in its first argument [38] (Lemma 1.2.7).
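The Sanov-type decay behind (A41) can be checked numerically. The following sketch is our own illustration (not part of the original analysis): it takes a binary example with $P_X = \mathrm{Bern}(0.5)$ and $Q_X = \mathrm{Bern}(0.2)$, computes $Q_X^n(\mathcal{T}_\mu^n(P_X))$ exactly in the log domain, and compares the empirical exponent with the Sanov prediction $\min_{|p'-p|\le\mu} D(p'\|q)$; all parameter values are arbitrary.

```python
import math

def d_bin(p, q):
    """Binary KL divergence D(p||q) in bits."""
    out = 0.0
    for a, b in ((p, q), (1 - p, 1 - q)):
        if a > 0:
            out += a * math.log2(a / b)
    return out

p, q, mu = 0.5, 0.2, 0.1

def log2_q_prob_typical(n):
    """log2 of Q^n{ |empirical freq. of ones - p| <= mu }, via log-sum-exp."""
    logs = [math.log2(math.comb(n, k)) + k * math.log2(q) + (n - k) * math.log2(1 - q)
            for k in range(n + 1) if abs(k / n - p) <= mu]
    m = max(logs)
    return m + math.log2(sum(2 ** (l - m) for l in logs))

# Sanov exponent: min of D(p'||q) over [p - mu, p + mu]; since D(.||q) grows
# away from q = 0.2, the minimum is attained at p' = p - mu.
exponent = d_bin(p - mu, q)
for n in (100, 400, 1600):
    emp = -log2_q_prob_typical(n) / n
    print(n, round(emp, 4), "vs", round(exponent, 4))
```

The empirical rate approaches the exponent from above as n grows, the gap being the polynomial prefactor in Sanov's upper bound.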

Write $H(\hat{X}^n)$ as $H(P_{\hat{X}^n})$ and let $P_{0^n}$ be the distribution on $\hat{\mathcal{X}}^n$ that places all of its probability mass on $0^n \in \hat{\mathcal{X}}^n$. Since $H(P_{0^n}) = 0$, by the uniform continuity of entropy [38] (Lemma 1.2.7),

$H(P_{\hat{X}^n}) \le 2|P_{\hat{X}^n} - P_{0^n}| \cdot \log \frac{|\hat{\mathcal{X}}|^n}{2|P_{\hat{X}^n} - P_{0^n}|}$. (A42)

Since $\gamma_n \to 0$ exponentially fast, the same holds true for $|P_{\hat{X}^n} - P_{0^n}|$, and so, by (A42), $H(P_{\hat{X}^n}) = H(\hat{X}^n) \to 0$. Therefore, under H=1, we have $T_n \to 0$ as $n \to \infty$.

Letting $n \to \infty$ and then letting $\mu, \mu' \to 0$, we obtain $\tilde{\theta}_\mu \to \theta$ and $\limsup_{n \to \infty} T_n \le L$, with θ given by the RHS of (11). This establishes the proof of Theorem 1.

Appendix B. Proof of Theorem 2

Achievability: The analysis is based on the scheme of Section 4.2 and follows steps similar to those in [1]. Recall the definition of the event $\mathcal{E}$ in (A4). Consider the type-I error probability as follows:

$\alpha_n \le \Pr[M=0 \text{ or } \mathcal{E} \mid H=0]$ (A43)
$\le \Pr[M=0 \mid H=0] + \Pr[\mathcal{E} \mid M \neq 0, H=0]$ (A44)
$\le \epsilon/2 + \epsilon/2$ (A45)
$= \epsilon$, (A46)

where (A46) follows from the covering lemma [34] (Lemma 3.3) and the rate constraint in (15), together with the Markov lemma [34] (Lemma 12.1). Now, consider the type-II error probability as follows:

$\beta_n = \Pr[\hat{H}=0 \mid H=1]$ (A47)
$= \Pr[\hat{H}=0, M \neq 0 \mid H=1]$ (A48)
$\le \Pr[\hat{H}=0 \mid H=1, M \neq 0]$ (A49)
$= \Pr[\hat{H}=0 \mid H=1, M=1]$, (A50)

where the last equality follows from the symmetry of the code construction. Now, the average of the type-II error probability over all codebooks satisfies:

$\mathbb{E}_{\mathcal{C}}[\beta_n] \le 2^{-n[I(U;Y) - \delta(\mu)]}$, (A51)

where $\delta(\mu)$ is a function that tends to zero as $\mu \to 0$. The privacy analysis is straightforward since the privacy mechanism is memoryless, whence we have

$\frac{1}{n} I(X^n;\hat{X}^n) = I(X;\hat{X}) = L + \zeta$, (A52)

where the last equality follows from the privacy constraint in (15). This concludes the proof of achievability.
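The tensorization step in (A52), $\frac{1}{n}I(X^n;\hat{X}^n) = I(X;\hat{X})$ for a memoryless mechanism, can be verified numerically. The sketch below is our own illustration (the BSC mechanism and its crossover value are arbitrary choices): it computes the mutual information for one letter and for two letters under the product mechanism.

```python
import itertools, math

def mi(joint):
    """Mutual information (bits) of a joint pmf given as dict[(x, xh)] -> prob."""
    px, pxh = {}, {}
    for (x, xh), pr in joint.items():
        px[x] = px.get(x, 0.0) + pr
        pxh[xh] = pxh.get(xh, 0.0) + pr
    return sum(pr * math.log2(pr / (px[x] * pxh[xh]))
               for (x, xh), pr in joint.items() if pr > 0)

pX = {0: 0.5, 1: 0.5}
delta = 0.11                      # BSC crossover of the privacy mechanism
W = {(x, xh): (1 - delta if x == xh else delta) for x in (0, 1) for xh in (0, 1)}

single = mi({(x, xh): pX[x] * W[(x, xh)] for x in (0, 1) for xh in (0, 1)})

# two-letter joint of (X^2, Xhat^2) under the product (memoryless) mechanism
joint2 = {}
for x in itertools.product((0, 1), repeat=2):
    for xh in itertools.product((0, 1), repeat=2):
        joint2[(x, xh)] = pX[x[0]] * pX[x[1]] * W[(x[0], xh[0])] * W[(x[1], xh[1])]
print(single, mi(joint2))  # mi(joint2) equals 2 * single up to float error
```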

Converse: We now prove the strong converse, which involves an extension of the η-image characterization technique [4,38]. The proof proceeds as follows. First, we find a truncated distribution $P_{\underline{\hat{X}}^n}$ which is arbitrarily close to $P_{\hat{X}^n}$ in terms of entropy. Then, we analyze the type-II error probability under a constrained type-I error probability. Finally, a single-letter characterization of the rate and privacy constraints is given.

  • (1) Construction of a Truncated Distribution:

Since the privacy mechanism is memoryless, we conclude that $(X^n, \hat{X}^n, Y^n)$ is i.i.d. according to $P_{X\hat{X}Y} \triangleq P_{\hat{X}|X} P_{XY}$. For a given $P_{\hat{X}Y}$, define $V^n(y^n|\hat{x}^n) \triangleq P_{Y|\hat{X}}^n(y^n|\hat{x}^n)$ for all $\hat{x}^n \in \hat{\mathcal{X}}^n$ and $y^n \in \mathcal{Y}^n$. A set $\mathcal{B} \subseteq \mathcal{Y}^n$ is an η-image of the set $\mathcal{A} \subseteq \hat{\mathcal{X}}^n$ over the channel $V^n$ if

$V^n(\mathcal{B}|\hat{x}^n) \ge \eta, \quad \forall \hat{x}^n \in \mathcal{A}$. (A53)

The privacy mechanism is the same under both hypotheses; thus, we can define the acceptance region based on $(\hat{x}^n, y^n)$ as follows:

$\mathcal{A}_n \triangleq \{(\hat{x}^n, y^n) : g^{(n)}(y^n, \phi^{(n)}(\hat{x}^n)) = 0\}$. (A54)

For any encoding function $\phi^{(n)}$ and an acceptance region $\mathcal{A}_n \subseteq \hat{\mathcal{X}}^n \times \mathcal{Y}^n$, let $\tau_n$ denote the cardinality of the codebook and define the following sets:

$\mathcal{C}_i \triangleq \{\hat{x}^n \in \hat{\mathcal{X}}^n : \phi^{(n)}(\hat{x}^n) = i\}$, (A55)
$\mathcal{D}_i \triangleq \{y^n \in \mathcal{Y}^n : g^{(n)}(y^n, i) = 0\}, \quad 1 \le i \le \tau_n$. (A56)

The acceptance region can be written as follows:

$\mathcal{A}_n = \bigcup_{i=1}^{\tau_n} \mathcal{C}_i \times \mathcal{D}_i$, (A57)

where $\mathcal{C}_i \cap \mathcal{C}_j = \emptyset$ for all $i \neq j$. Define the set $\mathcal{B}_n(\eta)$ as follows:

$\mathcal{B}_n(\eta) \triangleq \{\hat{x}^n : V^n(\mathcal{D}_{\phi^{(n)}(\hat{x}^n)}|\hat{x}^n) \ge \eta\}$. (A58)

Fix $\epsilon \in [0,1)$ and notice that the type-I error probability is upper bounded as

$\alpha_n = P_{\hat{X}Y}^n(\mathcal{A}_n^c) \le \epsilon$, (A59)

which we can write equivalently as

$1-\epsilon \le P_{\hat{X}Y}^n(\mathcal{A}_n)$ (A60)
$= \sum_{\hat{x}^n \in \mathcal{B}_n(\eta)} P_{\hat{X}}^n(\hat{x}^n)\, V^n(\mathcal{D}_{\phi^{(n)}(\hat{x}^n)}|\hat{x}^n) + \sum_{\hat{x}^n \in \mathcal{B}_n^c(\eta)} P_{\hat{X}}^n(\hat{x}^n)\, V^n(\mathcal{D}_{\phi^{(n)}(\hat{x}^n)}|\hat{x}^n)$ (A61)
$\le P_{\hat{X}}^n(\mathcal{B}_n(\eta)) + \eta\,\big(1 - P_{\hat{X}}^n(\mathcal{B}_n(\eta))\big)$, (A62)

where the bound on the first term holds because $V^n(\mathcal{D}_{\phi^{(n)}(\hat{x}^n)}|\hat{x}^n) \le 1$, and the bound on the second term holds because for any $\hat{x}^n \in \mathcal{B}_n^c(\eta)$, we have $V^n(\mathcal{D}_{\phi^{(n)}(\hat{x}^n)}|\hat{x}^n) < \eta$.

In what follows, let $\eta = \frac{1-\epsilon}{2}$. Inequality (A62) implies

$P_{\hat{X}}^n(\mathcal{B}_n(\eta)) \ge \frac{1-\epsilon}{1+\epsilon}$. (A63)

Let $\mu_n = n^{-1/3}$. For the typical set $\mathcal{T}_{\mu_n}^n(P_{\hat{X}})$, we have

$P_{\hat{X}}^n(\mathcal{T}_{\mu_n}^n(P_{\hat{X}})) \ge 1 - |\hat{\mathcal{X}}|\,2^{-\mu_n n}$. (A64)

Hence,

$P_{\hat{X}}^n\big(\mathcal{T}_{\mu_n}^n(P_{\hat{X}}) \cap \mathcal{B}_n(\eta)\big) \ge P_{\hat{X}}^n(\mathcal{T}_{\mu_n}^n(P_{\hat{X}})) + P_{\hat{X}}^n(\mathcal{B}_n(\eta)) - 1$ (A65)
$\ge \frac{1-\epsilon}{1+\epsilon} - |\hat{\mathcal{X}}|\,2^{-\mu_n n}$. (A66)

For any $0 < \delta < \frac{1-\epsilon}{1+\epsilon}$ and for sufficiently large n,

$P_{\hat{X}}^n\big(\mathcal{T}_{\mu_n}^n(P_{\hat{X}}) \cap \mathcal{B}_n(\eta)\big) \ge \delta$. (A67)

We can also write $\mathcal{T}_{\mu_n}^n(P_{\hat{X}})$ as

$\mathcal{T}_{\mu_n}^n(P_{\hat{X}}) = \bigcup_{\hat{P}_{\hat{X}} : |\hat{P}_{\hat{X}} - P_{\hat{X}}| \le \mu_n} \mathcal{T}^n(\hat{P}_{\hat{X}})$. (A68)

Combining the above equations, we get

$\sum_{\hat{P}_{\hat{X}} : |\hat{P}_{\hat{X}} - P_{\hat{X}}| \le \mu_n} P_{\hat{X}}^n\big(\mathcal{T}^n(\hat{P}_{\hat{X}}) \cap \mathcal{B}_n(\eta)\big) \ge \delta$. (A69)

Let $\tilde{P}_{\hat{X}}$ denote the type that maximizes the $P_{\hat{X}}^n$-probability of $\mathcal{T}^n(\cdot) \cap \mathcal{B}_n(\eta)$ among all such types. As there exist at most $(n+1)^{|\hat{\mathcal{X}}|}$ possible types, it holds that

$P_{\hat{X}}^n\big(\mathcal{T}^n(\tilde{P}_{\hat{X}}) \cap \mathcal{B}_n(\eta)\big) \ge \frac{\delta}{(n+1)^{|\hat{\mathcal{X}}|}}$. (A70)

Define the set $\Psi_n(\eta) \triangleq \mathcal{T}^n(\tilde{P}_{\hat{X}}) \cap \mathcal{B}_n(\eta)$. We can write the probability in (A70) as

$P_{\hat{X}}^n\big(\mathcal{T}^n(\tilde{P}_{\hat{X}}) \cap \mathcal{B}_n(\eta)\big) = \sum_{\hat{x}^n \in \Psi_n(\eta)} P_{\hat{X}}^n(\hat{x}^n)$ (A71)
$= \sum_{\hat{x}^n \in \Psi_n(\eta)} 2^{-n[D(\tilde{P}_{\hat{X}} \| P_{\hat{X}}) + H_{\tilde{P}_{\hat{X}}}(\hat{X})]}$ (A72)
$\le \sum_{\hat{x}^n \in \Psi_n(\eta)} 2^{-n[H(\hat{X}) - \delta_1]}$, (A73)

where $\delta_1 \to 0$ as $n \to \infty$ due to the fact that $D(\tilde{P}_{\hat{X}} \| P_{\hat{X}}) \to 0$ and $|\tilde{P}_{\hat{X}} - P_{\hat{X}}| \le \mu_n$, so the corresponding entropies are also arbitrarily close. It then follows from (A70) and (A73) that

$\frac{1}{n} \log|\Psi_n(\eta)| \ge H(\hat{X}) - \delta_2$, (A74)

where $\delta_2 \to 0$ as $\mu_n \to 0$.
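Step (A72) uses the standard identity that an i.i.d. measure $P_{\hat{X}}^n$ assigns probability exactly $2^{-n[D(\tilde{P}\|P) + H(\tilde{P})]}$ to every sequence of type $\tilde{P}$; this identity is also what makes the truncated distribution below uniform on a type class. A quick numerical check, with an arbitrary binary example of our own choosing:

```python
import math

def entropy(pmf):
    return -sum(pi * math.log2(pi) for pi in pmf if pi > 0)

def kl(pmf, qmf):
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(pmf, qmf) if pi > 0)

n, k = 20, 7                  # a length-20 binary sequence containing 7 ones
p = 0.3                       # i.i.d. source P_X = Bern(0.3)
phat = (1 - k / n, k / n)     # the sequence's type
P = (1 - p, p)

direct = (1 - p) ** (n - k) * p ** k                     # P^n of the sequence
via_types = 2 ** (-n * (kl(phat, P) + entropy(phat)))    # type identity
print(direct, via_types)  # identical up to floating-point error
```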

The encoding function $\phi^{(n)}$ partitions the set $\Psi_n(\eta)$ into $\tau_n$ non-intersecting subsets $\{\mathcal{S}_i\}_{i=1}^{\tau_n}$ such that $\phi^{(n)}(\hat{x}^n) = i$ for any $\hat{x}^n \in \mathcal{S}_i$. Define the following distribution:

$P_{\underline{\hat{X}}^n}(\hat{x}^n) \triangleq \frac{P_{\hat{X}}^n(\hat{x}^n) \cdot \mathbb{1}\{\hat{x}^n \in \Psi_n(\eta)\}}{P_{\hat{X}}^n(\Psi_n(\eta))}$. (A75)

Please note that this distribution, henceforth denoted as $P^{(n)}$, corresponds to the uniform distribution over $\Psi_n(\eta)$, because all sequences in $\Psi_n(\eta)$ have the same type $\tilde{P}_{\hat{X}}$, and the probability is uniform on a type class under any i.i.d. measure.

Finally, define the following truncated joint distribution:

$P_{\underline{M}\,\underline{X}^n\underline{\hat{X}}^n\underline{Y}^n}(m, x^n, \hat{x}^n, y^n) \triangleq \mathbb{1}\{\phi^{(n)}(\hat{x}^n) = m\}\, P_{X|\hat{X}}^n(x^n|\hat{x}^n)\, P_{\underline{\hat{X}}^n}(\hat{x}^n)\, P_{Y|X}^n(y^n|x^n)$. (A76)
  • (2) Analysis of the Type-II Error Exponent:

The proof of the upper bound on the error exponent relies on Lemma A1 below. For a set $\mathcal{A} \subseteq \hat{\mathcal{X}}^n$, let $\mathcal{B}(\mathcal{A}, \eta)$ denote the collection of all η-images of $\mathcal{A}$, define $Q_{X\hat{X}Y} \triangleq P_{\hat{X}|X} Q_{XY}$, and

$\kappa_{V^n}(\mathcal{A}, Q_{\hat{X}Y}, \eta) \triangleq \min_{\mathcal{B} \in \mathcal{B}(\mathcal{A}, \eta)} \frac{Q_{\hat{X}Y}^n(\mathcal{A} \times \mathcal{B})}{P_{\hat{X}}^n(\mathcal{A})}$. (A77)

This quantity is a generalization of the minimum cardinality of the η-images in [38] and is closely related to the minimum type-II error probability associated with the set $\mathcal{A}$.

For the testing-against-independence setup, $Q_{\hat{X}Y} = P_{\hat{X}} \cdot P_Y$, and thus

$\frac{Q_{\hat{X}Y}^n(\mathcal{A} \times \mathcal{B})}{P_{\hat{X}}^n(\mathcal{A})} = \frac{P_{\hat{X}}^n(\mathcal{A})\, P_Y^n(\mathcal{B})}{P_{\hat{X}}^n(\mathcal{A})} = P_Y^n(\mathcal{B})$, (A78)

and $\kappa_{V^n}(\mathcal{A}, Q_{\hat{X}Y}, \eta)$ is simply written as $\kappa_{V^n}(\mathcal{A}, \eta)$ and is given by

$\kappa_{V^n}(\mathcal{A}, \eta) \triangleq \min_{\mathcal{B} \in \mathcal{B}(\mathcal{A}, \eta)} P_Y^n(\mathcal{B})$. (A79)

Lemma A1

(Lemma 3 in [4]). For any set $\mathcal{A} \subseteq \hat{\mathcal{X}}^n$, consider a distribution $P_{\mathcal{A}}^{(n)}$ over $\mathcal{A}$ and let $P_{\mathcal{A}}^{(n)} V^n$ be its corresponding output distribution induced by the channel $V^n$, i.e.,

$(P_{\mathcal{A}}^{(n)} V^n)(y^n) \triangleq \sum_{\hat{x}^n \in \mathcal{A}} P_{\mathcal{A}}^{(n)}(\hat{x}^n)\, V^n(y^n|\hat{x}^n)$. (A80)

Then, for every δ>0, 0<η<1, we have

$\kappa_{V^n}(\mathcal{A}, \eta) \ge 2^{-D(P_{\mathcal{A}}^{(n)} V^n \| P_Y^n) - n\delta}$ (A81)

for sufficiently large n.

Let $P_i^{(n)} V^n$ be the distribution of the random variable $\underline{Y}^n$ given $\underline{M} = i$. The type-II error probability can be lower bounded as:

$\beta_n \ge \sum_{\hat{x}^n \in \Psi_n(\eta)} P_{\hat{X}}^n(\hat{x}^n) \cdot P_Y^n(\mathcal{D}_{\phi^{(n)}(\hat{x}^n)})$ (A82)
$= \sum_{i=1}^{\tau_n} P_{\hat{X}}^n(\mathcal{S}_i) \cdot P_Y^n(\mathcal{D}_i)$ (A83)
$\ge \sum_{i=1}^{\tau_n} P_{\hat{X}}^n(\mathcal{S}_i) \cdot \kappa_{V^n}(\mathcal{S}_i, \eta)$ (A84)
$= P_{\hat{X}}^n(\Psi_n(\eta)) \cdot \sum_{i=1}^{\tau_n} P^{(n)}(\mathcal{S}_i) \cdot \kappa_{V^n}(\mathcal{S}_i, \eta)$ (A85)
$\ge 2^{-n\delta} \cdot P_{\hat{X}}^n(\Psi_n(\eta)) \cdot \sum_{i=1}^{\tau_n} P^{(n)}(\mathcal{S}_i) \cdot 2^{-D(P_i^{(n)} V^n \| P_Y^n)}$ (A86)
$\ge 2^{-n\delta} \cdot P_{\hat{X}}^n(\Psi_n(\eta)) \cdot 2^{-\sum_{i=1}^{\tau_n} P^{(n)}(\mathcal{S}_i) \cdot D(P_i^{(n)} V^n \| P_Y^n)}$ (A87)
$\ge 2^{-n\delta} \cdot \frac{\delta}{(n+1)^{|\hat{\mathcal{X}}|}} \cdot 2^{-\sum_{i=1}^{\tau_n} P^{(n)}(\mathcal{S}_i) \cdot D(P_i^{(n)} V^n \| P_Y^n)}$, (A88)

where (A84) follows from the definition of $\kappa_{V^n}(\mathcal{S}_i, \eta)$; (A86) follows because Lemma A1 implies that, for any distribution $P_i^{(n)}$ over the set $\mathcal{S}_i$, it holds that $\kappa_{V^n}(\mathcal{S}_i, \eta) \ge 2^{-D(P_i^{(n)} V^n \| P_Y^n) - n\delta}$; (A87) follows from the convexity of the function $t \mapsto 2^{-t}$; and (A88) follows by (A70) and the fact that $\Pr(\mathcal{A}) \ge \Pr(\mathcal{A} \cap \mathcal{B})$. Hence,

$-\frac{1}{n} \log \beta_n \le \delta' + \frac{1}{n} \sum_{i=1}^{\tau_n} P^{(n)}(\mathcal{S}_i) \cdot D(P_i^{(n)} V^n \| P_Y^n)$, (A89)

where $\delta' \triangleq \delta - \frac{1}{n} \log \frac{\delta}{(n+1)^{|\hat{\mathcal{X}}|}}$.

  • (3) Single-Letterization Steps and Analyses of the Rate and Privacy Constraints:

In the following, we provide a single-letter characterization of the upper bound in (A89). Since $P^{(n)}(\mathcal{S}_i) = P_{\underline{M}}(i)$, the right-hand side of (A89) can be upper bounded as follows:

$\frac{1}{n} \sum_{i=1}^{\tau_n} P^{(n)}(\mathcal{S}_i) \cdot D(P_i^{(n)} V^n \| P_Y^n) = \frac{1}{n} \sum_{i=1}^{\tau_n} \sum_{y^n \in \mathcal{Y}^n} P_{\underline{M}\,\underline{Y}^n}(i, y^n) \log \frac{P_{\underline{Y}^n|\underline{M}}(y^n|i)}{P_Y^n(y^n)}$ (A90)
$= -\frac{1}{n} H(\underline{Y}^n|\underline{M}) - \frac{1}{n} \sum_{y^n \in \mathcal{Y}^n} P_{\underline{Y}^n}(y^n) \log P_Y^n(y^n)$ (A91)
$= -\frac{1}{n} H(\underline{Y}^n|\underline{M}) - \frac{1}{n} \sum_{y^n \in \mathcal{Y}^n} P_{\underline{Y}^n}(y^n) \sum_{t=1}^n \log P_Y(y_t)$ (A92)
$= -\frac{1}{n} H(\underline{Y}^n|\underline{M}) - \frac{1}{n} \sum_{t=1}^n \sum_{y^n \in \mathcal{Y}^n} P_{\underline{Y}^n}(y^n) \log P_Y(y_t)$ (A93)
$= -\frac{1}{n} H(\underline{Y}^n|\underline{M}) - \frac{1}{n} \sum_{t=1}^n \sum_{y_t \in \mathcal{Y}} P_{\underline{Y}_t}(y_t) \log P_Y(y_t)$ (A94)
$= -\frac{1}{n} H(\underline{Y}^n|\underline{M}) + \frac{1}{n} \sum_{t=1}^n \big[ H(\underline{Y}_t) + D(P_{\underline{Y}_t} \| P_Y) \big]$ (A95)
$= \frac{1}{n} \sum_{t=1}^n \big[ H(\underline{Y}_t) - H(\underline{Y}_t|\underline{M}, \underline{Y}^{t-1}) + D(P_{\underline{Y}_t} \| P_Y) \big]$ (A96)
$\le \frac{1}{n} \sum_{t=1}^n I(\underline{M}, \underline{X}^{t-1}, \underline{\hat{X}}^{t-1}; \underline{Y}_t) + \frac{1}{n} \sum_{t=1}^n D(P_{\underline{Y}_t} \| P_Y)$ (A97)
$= \frac{1}{n} \sum_{t=1}^n I(\underline{U}_t; \underline{Y}_t) + \frac{1}{n} \sum_{t=1}^n D(P_{\underline{Y}_t} \| P_Y)$ (A98)
$= I(\underline{U}; \underline{Y}) + D(P_{\underline{Y}} \| P_Y)$. (A99)

Here, (A96)–(A99) are justified in the following:

  • (A96) follows by the chain rule;

  • (A97) follows from the Markov chain $\underline{Y}^{t-1} - (\underline{M}, \underline{X}^{t-1}, \underline{\hat{X}}^{t-1}) - \underline{Y}_t$;

  • (A98) follows from the definition
    $\underline{U}_t \triangleq (\underline{M}, \underline{X}^{t-1}, \underline{\hat{X}}^{t-1})$; (A100)

  • (A99) follows by defining a time-sharing random variable T uniformly distributed over $\{1, \ldots, n\}$ and the following:
    $\underline{U} \triangleq (\underline{U}_T, T), \qquad \underline{Y} \triangleq \underline{Y}_T$. (A101)

This leads to the following upper bound on the type-II error exponent:

$-\frac{1}{n} \log \beta_n \le I(\underline{U}; \underline{Y}) + D(P_{\underline{Y}} \| P_Y) + \delta'$. (A102)

Next, the rate constraint satisfies the following:

$nR \ge H(\underline{M})$ (A103)
$\ge I(\underline{M}; \underline{\hat{X}}^n)$ (A104)
$= H(\underline{\hat{X}}^n) - H(\underline{\hat{X}}^n|\underline{M})$ (A105)
$= \log|\Psi_n(\eta)| - H(\underline{\hat{X}}^n|\underline{M})$ (A106)
$\ge n(H(\hat{X}) - \delta_2) - H(\underline{\hat{X}}^n|\underline{M})$ (A107)
$= nH(\hat{X}) - \sum_{t=1}^n H(\underline{\hat{X}}_t|\underline{\hat{X}}^{t-1}, \underline{M}) - n\delta_2$ (A108)
$= nH(\hat{X}) - \sum_{t=1}^n H(\underline{\hat{X}}_t|\underline{U}_t) - n\delta_2$ (A109)
$= nH(\hat{X}) - nH(\underline{\hat{X}}|\underline{U}) - n\delta_2$, (A110)

where (A106) follows because the distribution $P_{\underline{\hat{X}}^n}$ is uniform over the set $\Psi_n(\eta)$; (A107) follows from (A74); (A109) follows from the definition in (A100); and (A110) follows by defining $\underline{\hat{X}} \triangleq \underline{\hat{X}}_T$.

Finally, the privacy measure satisfies the following:

$nL \ge I(X^n; \hat{X}^n)$ (A111)
$\ge I(\underline{X}^n; \underline{\hat{X}}^n)$ (A112)
$= H(\underline{\hat{X}}^n) - H(\underline{\hat{X}}^n|\underline{X}^n)$ (A113)
$= \log|\Psi_n(\eta)| - H(\underline{\hat{X}}^n|\underline{X}^n)$ (A114)
$\ge n(H(\hat{X}) - \delta_2) - H(\underline{\hat{X}}^n|\underline{X}^n)$ (A115)
$= n(H(\hat{X}) - \delta_2) - \sum_{t=1}^n H(\underline{\hat{X}}_t|\underline{\hat{X}}^{t-1}, \underline{X}^n)$ (A116)
$\ge n(H(\hat{X}) - \delta_2) - \sum_{t=1}^n H(\underline{\hat{X}}_t|\underline{X}_t)$ (A117)
$= nH(\hat{X}) - nH(\underline{\hat{X}}|\underline{X}) - n\delta_2$, (A118)

where (A112) follows because $(\underline{X}^n, \underline{\hat{X}}^n)$ are functions of $(X^n, \hat{X}^n)$, together with the data processing inequality; (A114) follows because $P_{\underline{\hat{X}}^n}$ is uniform over the set $\Psi_n(\eta)$ (see the definition in (A75)); (A115) follows from (A74); and (A118) follows by defining $\underline{X} \triangleq (\underline{X}_T, T)$, $\underline{\hat{X}} \triangleq (\underline{\hat{X}}_T, T)$ and choosing T uniformly over $\{1, \ldots, n\}$.

Since $\Psi_n(\eta) \subseteq \mathcal{T}^n(\tilde{P}_{\hat{X}})$, for any $\hat{x} \in \hat{\mathcal{X}}$,

$P_{\underline{\hat{X}}}(\hat{x}) = \frac{1}{n} \sum_{t=1}^n P_{\underline{\hat{X}}_t}(\hat{x})$ (A119)
$= \sum_{\hat{x}^n \in \Psi_n(\eta)} \frac{N(\hat{x}|\hat{x}^n)}{n \cdot |\Psi_n(\eta)|}$ (A120)
$= \tilde{P}_{\hat{X}}(\hat{x})$. (A121)

Recall that $|\tilde{P}_{\hat{X}} - P_{\hat{X}}| \le \mu_n$ with $\mu_n = n^{-1/3}$. Hence, from (A121), it holds that $|P_{\underline{\hat{X}}} - P_{\hat{X}}| \le \mu_n$. By the definitions of $\underline{\hat{X}}$, $\underline{X}$ and $\underline{Y}$, we have $P_{\hat{X}|X} = P_{\underline{\hat{X}}|\underline{X}}$ and $P_{Y|X} = P_{\underline{Y}|\underline{X}} = V$. The random variable U is chosen over the same alphabet as $\underline{U}$ and such that $P_{U|\hat{X}} = P_{\underline{U}|\underline{\hat{X}}}$.

Since $P_Y(y) > 0$ for all $y \in \mathcal{Y}$, letting $n \to \infty$ (so that $\mu_n \to 0$) and invoking the uniform continuity of the involved information-theoretic quantities yields the following upper bound on the optimal error exponent:

$\theta_\epsilon^*(R, L) \le I(U; Y)$, (A122)

subject to the rate constraint:

$R \ge I(U; \hat{X})$, (A123)

and the privacy constraint:

$L \ge I(X; \hat{X})$. (A124)

This concludes the proof of the converse.

Appendix C. Proof of Converse of Proposition 1

We simplify Theorem 2 for the proposed binary setup. As discussed in Section 4.3, from the fact that $|\hat{\mathcal{X}}| = 2$ and the symmetry of the source X on its alphabet, we can, without loss of optimality, choose $P_{\hat{X}|X}$ to be a BSC. First, consider the rate constraint:

$R \ge I(U; \hat{X})$ (A125)
$= H(\hat{X}) - H(\hat{X}|U)$ (A126)
$= 1 - H(\hat{X}|U)$, (A127)

which can be equivalently written as the following:

$H(\hat{X}|U) \ge 1 - R$. (A128)

Also, the privacy criterion can be simplified as follows:

$L \ge I(\hat{X}; X)$ (A129)
$= H(\hat{X}) - H(\hat{X}|X)$ (A130)
$= 1 - H(\hat{X}|X)$ (A131)
$= 1 - H(\hat{Z})$, (A132)

which can be equivalently written as

$H(\hat{Z}) \ge 1 - L$. (A133)

Now, consider the error exponent θ as follows:

$\theta \le I(U; Y)$ (A134)
$= H(Y) - H(Y|U)$ (A135)
$= H(Y) - H(X \oplus N \mid U)$ (A136)
$= H(Y) - H(\hat{X} \oplus \hat{Z} \oplus N \mid U)$ (A137)
$\le H(Y) - h_b\big( h_b^{-1}(H(\hat{X}|U)) \star h_b^{-1}(1-L) \star q \big)$ (A138)
$\le H(Y) - h_b\big( h_b^{-1}(1-R) \star h_b^{-1}(1-L) \star q \big)$, (A139)

where (A138) follows from Mrs. Gerber's lemma [39] (Theorem 1), the fact that $(\hat{Z}, N)$ is independent of U, and (A133); (A139) follows from (A128). This concludes the proof of the proposition.
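The bound (A139) is easy to evaluate numerically. The sketch below is our own illustration (the values of R, L and q are arbitrary, and X is assumed uniform so that H(Y) = 1 bit): it computes $h_b^{-1}$ by bisection on $[0, 1/2]$ and the binary convolution $a \star b = a(1-b) + b(1-a)$.

```python
import math

def hb(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def hb_inv(h):
    """Inverse of hb restricted to [0, 1/2], via bisection."""
    lo, hi = 0.0, 0.5
    for _ in range(80):
        mid = (lo + hi) / 2
        if hb(mid) < h:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def conv(a, b):
    """Binary convolution a * b = a(1-b) + b(1-a)."""
    return a * (1 - b) + b * (1 - a)

R, L, q = 0.4, 0.6, 0.1   # example rate, privacy level, and BSC(q) noise N
# X uniform implies Y = X xor N is uniform, so H(Y) = 1 bit
theta_ub = 1.0 - hb(conv(conv(hb_inv(1 - R), hb_inv(1 - L)), q))
print(theta_ub)
```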

Appendix D. Euclidean Approximation of Testing against Independence

We analyze the Euclidean approximation with the parameters W, $\psi_u(\hat{x})$, $\phi_{\hat{x}}(x)$ and $\Lambda_u(x)$ defined in Section 4.4. Notice that since $U - \hat{X} - X - Y$ forms a Markov chain, it holds that, for any $u \in \mathcal{U}$,

$P_{Y|U=u} = W P_{X|U=u}$. (A140)

Now, consider the following chain of equalities for any $x \in \mathcal{X}$:

$P_{X|U}(x|u) = \sum_{\hat{x} \in \hat{\mathcal{X}}} P_{X\hat{X}|U}(x, \hat{x}|u)$ (A141)
$= \sum_{\hat{x} \in \hat{\mathcal{X}}} P_{\hat{X}|U}(\hat{x}|u)\, P_{X|\hat{X},U}(x|\hat{x}, u)$ (A142)
$= \sum_{\hat{x} \in \hat{\mathcal{X}}} P_{\hat{X}|U}(\hat{x}|u)\, P_{X|\hat{X}}(x|\hat{x})$ (A143)
$= \sum_{\hat{x} \in \hat{\mathcal{X}}} \big[ P_{\hat{X}}(\hat{x}) + \psi_u(\hat{x}) \big] \big[ P_X(x) + \phi_{\hat{x}}(x) \big]$ (A144)
$= P_X(x) + \sum_{\hat{x} \in \hat{\mathcal{X}}} \psi_u(\hat{x}) \phi_{\hat{x}}(x) + \sum_{\hat{x} \in \hat{\mathcal{X}}} P_{\hat{X}}(\hat{x}) \phi_{\hat{x}}(x) + P_X(x) \sum_{\hat{x} \in \hat{\mathcal{X}}} \psi_u(\hat{x})$ (A145)
$= P_X(x) + \sum_{\hat{x} \in \hat{\mathcal{X}}} \psi_u(\hat{x}) \phi_{\hat{x}}(x)$, (A146)

where (A143)–(A146) are justified in the following:

  • (A143) follows from the Markov chain $U - \hat{X} - X$: given $\hat{X}$, the variables U and X are independent;

  • (A144) follows from (30) and (36);

  • (A146) follows from (31) and also from (36), which yields the following:
    $\sum_{\hat{x} \in \hat{\mathcal{X}}} P_{\hat{X}}(\hat{x}) \cdot \phi_{\hat{x}}(x) = 0$. (A147)

With the definition of Λu(x) in (42), we can write

$P_{X|U}(x|u) = P_X(x) + \Lambda_u(x), \quad \forall x \in \mathcal{X},\ u \in \mathcal{U}$. (A148)

Thus, we get

$P_{Y|U=u} = W P_X + W \Lambda_u$ (A149)
$= P_Y + W \Lambda_u$. (A150)

Applying the $\chi^2$-approximation and using (A150), we can rewrite I(U;Y) as follows:

$I(U;Y) \approx \frac{\log e}{2} \sum_{u \in \mathcal{U}} P_U(u) \left\| \frac{1}{\sqrt{P_Y}} W \Lambda_u \right\|^2$. (A151)

The above approximation, together with the definition of the vector $\Lambda_u$ in (43), yields the optimization problem in (45).
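The quality of the $\chi^2$-approximation in (A151) can be checked directly. The following sketch is our own illustration (the distributions, perturbation directions, and ε are arbitrary, chosen so that the rows remain valid pmfs and the Y-marginal stays $P_Y$): it compares the exact I(U;Y) with the approximation $\frac{\log_2 e}{2} \sum_u P_U(u) \sum_y (P_{Y|U=u}(y) - P_Y(y))^2 / P_Y(y)$.

```python
import math

pU = (0.5, 0.5)
pY = (0.2, 0.3, 0.5)
eps = 0.01
# perturbation directions: each sums to zero, and their P_U-average is zero,
# so every row is a pmf and the Y-marginal remains pY
dirs = ((1.0, -1.0, 0.0), (-1.0, 1.0, 0.0))
pY_given_U = [tuple(pY[y] + eps * d[y] for y in range(3)) for d in dirs]

exact = sum(pU[u] * p * math.log2(p / pY[y])
            for u, row in enumerate(pY_given_U)
            for y, p in enumerate(row) if p > 0)
chi2 = (math.log2(math.e) / 2) * sum(
    pU[u] * (row[y] - pY[y]) ** 2 / pY[y]
    for u, row in enumerate(pY_given_U) for y in range(3))
print(exact, chi2)  # nearly equal for small eps
```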

Appendix E. Proof of Proposition 2

Achievability: We specialize the achievable scheme of Theorem 2 to the proposed Gaussian setup. We choose the auxiliary random variables as in (63) and (65). Notice that from the Markov chain $U - \hat{X} - X - Y$ and the Gaussian choice of $\hat{X}$ in (63), which was discussed in Section 4.5, we can write $Y = \rho\hat{X} + F$, where $F \sim \mathcal{N}\big(0,\ 1 - \rho^2 (1 - 2^{-2L})\big)$ is independent of $\hat{X}$. These choices of auxiliary random variables lead to the following rate constraint:

$R \ge \frac{1}{2} \log \frac{1 - 2^{-2L}}{\beta^2}$, (A152)

which can be equivalently written as:

$2^{-2R} \cdot (1 - 2^{-2L}) \le \beta^2$. (A153)

The optimal error exponent is also lower bounded as follows

$\theta_\epsilon^*(R, L) \ge \frac{1}{2} \log \frac{1}{1 - \rho^2 (1 - 2^{-2L} - \beta^2)}$. (A154)

Combining (A153) and (A154) gives the lower bound on the error exponent in (64).

Converse: Consider the following upper bound on the optimal error exponent in Theorem 2:

$\theta_\epsilon^*(R, L) \le I(U; Y)$ (A155)
$= h(Y) - h(Y|U)$ (A156)
$= \frac{1}{2} \log 2\pi e - h(Y|U)$ (A157)
$= \frac{1}{2} \log 2\pi e - h(\rho\hat{X} + F \mid U)$ (A158)
$\le \frac{1}{2} \log 2\pi e - \frac{1}{2} \log\big( 2^{2h(\rho\hat{X}|U)} + 2\pi e (1 - \rho^2 (1 - 2^{-2L})) \big)$ (A159)
$= \frac{1}{2} \log 2\pi e - \frac{1}{2} \log\big( \rho^2 2^{2h(\hat{X}|U)} + 2\pi e (1 - \rho^2 (1 - 2^{-2L})) \big)$, (A160)

where (A159) follows from the entropy power inequality (EPI) [34] (Chapter 2). Now, consider the rate constraint as follows:

$R \ge I(\hat{X}; U)$ (A161)
$= h(\hat{X}) - h(\hat{X}|U)$ (A162)
$= \frac{1}{2} \log 2\pi e (1 - 2^{-2L}) - h(\hat{X}|U)$, (A163)

which is equivalent to

$2^{2h(\hat{X}|U)} \ge 2\pi e \cdot 2^{-2R} \cdot (1 - 2^{-2L})$. (A164)

Considering (A160) with (A164) yields the following upper bound on the error exponent:

$\theta_\epsilon^*(R, L) \le \frac{1}{2} \log 2\pi e - \frac{1}{2} \log\big( 2\pi e\, \rho^2 2^{-2R} (1 - 2^{-2L}) + 2\pi e (1 - \rho^2 (1 - 2^{-2L})) \big)$ (A165)
$= \frac{1}{2} \log \frac{1}{1 - \rho^2 (1 - 2^{-2R})(1 - 2^{-2L})}$. (A166)

This concludes the proof of the proposition.
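For concreteness, the closed-form exponent in (A166) can be evaluated numerically. The sketch below is our own illustration, with arbitrary example values of R, L and ρ; as sanity checks, the exponent increases in both R and L and saturates at $\frac{1}{2}\log\frac{1}{1-\rho^2}$, the exponent without rate or privacy constraints.

```python
import math

def theta_star(R, L, rho):
    """Type-II error exponent from (A166), using log base 2."""
    return 0.5 * math.log2(1.0 / (1.0 - rho**2 * (1 - 2**(-2 * R)) * (1 - 2**(-2 * L))))

rho = 0.8
for R, L in ((0.5, 0.5), (1.0, 0.5), (1.0, 2.0)):
    print(R, L, round(theta_star(R, L, rho), 4))
```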

Author Contributions

Investigation, A.G.; Supervision, S.S. and V.Y.F.T.; Writing—original draft, S.B.A.

Funding

This research was partially funded by grants R-263-000-C83-112 and R-263-000-C54-114.

Conflicts of Interest

The authors declare no conflict of interest.

References

  • 1. Ahlswede R., Csiszár I. Hypothesis testing with communication constraints. IEEE Trans. Inf. Theory. 1986;32:533–542. doi: 10.1109/TIT.1986.1057194.
  • 2. Zhao W., Lai L. Distributed testing against independence with multiple terminals. In: Proceedings of the 2014 52nd Annual Allerton Conference on Communication, Control, and Computing (Allerton); Monticello, IL, USA; 30 September–3 October 2014; pp. 1246–1251.
  • 3. Xiang Y., Kim Y.H. Interactive hypothesis testing against independence. In: Proceedings of the 2013 IEEE International Symposium on Information Theory; Istanbul, Turkey; 7–12 July 2013; pp. 2840–2844.
  • 4. Tian C., Chen J. Successive refinement for hypothesis testing and lossless one-helper problem. IEEE Trans. Inf. Theory. 2008;54:4666–4681. doi: 10.1109/TIT.2008.928951.
  • 5. Rahman M.S., Wagner A.B. On the optimality of binning for distributed hypothesis testing. IEEE Trans. Inf. Theory. 2012;58:6282–6303. doi: 10.1109/TIT.2012.2206793.
  • 6. Sreekumar S., Gündüz D. Distributed Hypothesis Testing Over Noisy Channels. In: Proceedings of the 2017 IEEE International Symposium on Information Theory (ISIT); Aachen, Germany; 25–30 June 2017.
  • 7. Salehkalaibar S., Wigger M., Timo R. On hypothesis testing against independence with multiple decision centers. arXiv 2017, arXiv:1708.03941. doi: 10.1109/TCOMM.2018.2798659.
  • 8. Salehkalaibar S., Wigger M., Wang L. Hypothesis Testing in Multi-Hop Networks. arXiv 2017, arXiv:1708.05198.
  • 9. Mhanna M., Piantanida P. On secure distributed hypothesis testing. In: Proceedings of the 2015 IEEE International Symposium on Information Theory (ISIT); Hong Kong, China; 14–19 June 2015; pp. 1605–1609.
  • 10. Han T.S. Hypothesis testing with multiterminal data compression. IEEE Trans. Inf. Theory. 1987;33:759–772. doi: 10.1109/TIT.1987.1057383.
  • 11. Shimokawa H., Han T., Amari S.I. Error bound for hypothesis testing with data compression. IEEE Trans. Inf. Theory. 1994;32:533–542.
  • 12. Ugur Y., Aguerri I.E., Zaidi A. Vector Gaussian CEO Problem Under Logarithmic Loss and Applications. arXiv 2018, arXiv:1811.03933.
  • 13. Zaidi A., Aguerri I.E., Caire G., Shamai S. Uplink oblivious cloud radio access networks: An information theoretic overview. In: Proceedings of the 2018 Information Theory and Applications Workshop (ITA); San Diego, CA, USA; 11–16 February 2018.
  • 14. Aguerri I.E., Zaidi A., Caire G., Shamai S. On the capacity of cloud radio access networks with oblivious relaying. In: Proceedings of the 2017 IEEE International Symposium on Information Theory (ISIT); Aachen, Germany; 25–30 June 2017.
  • 15. Aguerri I.E., Zaidi A. Distributed information bottleneck method for discrete and Gaussian sources. In: Proceedings of the 2018 International Zurich Seminar on Information and Communication (IZS); Zurich, Switzerland; 21–23 February 2018.
  • 16. Aguerri I.E., Zaidi A. Distributed variational representation learning. arXiv 2018, arXiv:1807.04193. doi: 10.1109/TPAMI.2019.2928806.
  • 17. Evfimievski A.V., Gehrke J., Srikant R. Limiting privacy breaches in privacy preserving data mining. In: Proceedings of the Twenty-Second Symposium on Principles of Database Systems; San Diego, CA, USA; 9–11 June 2003; pp. 211–222.
  • 18. Smith G. On the Foundations of Quantitative Information Flow. In: Proceedings of the 12th International Conference on Foundations of Software Science and Computational Structures, Held as Part of the Joint European Conferences on Theory and Practice of Software (ETAPS 2009). Springer; Berlin/Heidelberg, Germany: 2009; pp. 288–302.
  • 19. Sankar L., Rajagopalan S.R., Poor H.V. Utility-Privacy Tradeoffs in Databases: An Information-Theoretic Approach. IEEE Trans. Inf. Forensics Secur. 2013;8:838–852. doi: 10.1109/TIFS.2013.2253320.
  • 20. Liao J., Sankar L., Tan V.Y.F., Calmon F. Hypothesis Testing Under Mutual Information Privacy Constraints in the High Privacy Regime. IEEE Trans. Inf. Forensics Secur. 2018;13:1058–1071. doi: 10.1109/TIFS.2017.2779108.
  • 21. Barthe G., Köpf B. Information-theoretic bounds for differentially private mechanisms. In: Proceedings of the 2011 IEEE 24th Computer Security Foundations Symposium; Cernay-la-Ville, France; 27–29 June 2011; pp. 191–204.
  • 22. Issa I., Wagner A.B. Operational definitions for some common information leakage metrics. In: Proceedings of the 2017 IEEE International Symposium on Information Theory (ISIT); Aachen, Germany; 25–30 June 2017; pp. 769–773.
  • 23. Liao J., Sankar L., Calmon F., Tan V.Y.F. Hypothesis testing under maximal leakage privacy constraints. In: Proceedings of the 2017 IEEE International Symposium on Information Theory (ISIT); Aachen, Germany; 25–30 June 2017; pp. 779–783.
  • 24. Wagner I., Eckhoff D. Technical Privacy Metrics: A Systematic Survey. ACM Comput. Surv. (CSUR) 2018;51. doi: 10.1145/3168389. To appear.
  • 25. Dwork C. Differential Privacy. In: Proceedings of the 33rd International Colloquium on Automata, Languages and Programming, Part II (ICALP 2006). Volume 4052. Springer; Venice, Italy: 2006; pp. 1–12.
  • 26. Dwork C., Kenthapadi K., McSherry F., Mironov I., Naor M. Our Data, Ourselves: Privacy Via Distributed Noise Generation. In: Advances in Cryptology (EUROCRYPT 2006). Volume 4004. Springer; Saint Petersburg, Russia: 2006; pp. 486–503.
  • 27. Dwork C. Differential Privacy: A Survey of Results. In: Theory and Applications of Models of Computation (TAMC 2008). Lecture Notes in Computer Science, Volume 4978. Springer; Heidelberg, Germany: 2008.
  • 28. Wasserman L., Zhou S. A statistical framework for differential privacy. J. Am. Stat. Assoc. 2010;105:375–389. doi: 10.1198/jasa.2009.tm08651.
  • 29. Sreekumar A.C., Gunduz D. Distributed hypothesis testing with a privacy constraint. arXiv 2018, arXiv:1806.02015.
  • 30. Borade S., Zheng L. Euclidean Information Theory. In: Proceedings of the 2008 IEEE International Zurich Seminar on Communications; Zurich, Switzerland; 12–14 March 2008; pp. 14–17.
  • 31. Huang S., Suh C., Zheng L. Euclidean information theory of networks. IEEE Trans. Inf. Theory. 2015;61:6795–6814. doi: 10.1109/TIT.2015.2484066.
  • 32. Viterbi A.J., Omura J.K. Principles of Digital Communication and Coding. McGraw-Hill; New York, NY, USA: 1979.
  • 33. Weinberger N., Merhav N. Optimum tradeoffs between the error exponent and the excess-rate exponent of variable rate Slepian-Wolf coding. IEEE Trans. Inf. Theory. 2015;61:2165–2190. doi: 10.1109/TIT.2015.2405537.
  • 34. El Gamal A., Kim Y.H. Network Information Theory. Cambridge University Press; Cambridge, UK: 2011.
  • 35. Shalaby H.M.H., Papamarcou A. Multiterminal detection with zero-rate data compression. IEEE Trans. Inf. Theory. 1992;38:254–267. doi: 10.1109/18.119685.
  • 36. Cover T.M., Thomas J.A. Elements of Information Theory. 2nd ed. Wiley; Hoboken, NJ, USA: 2006.
  • 37. Watanabe S. Neyman-Pearson Test for Zero-Rate Multiterminal Hypothesis Testing. IEEE Trans. Inf. Theory. 2018;64:4923–4939. doi: 10.1109/TIT.2017.2778252.
  • 38. Csiszár I., Körner J. Information Theory: Coding Theorems for Discrete Memoryless Systems. Academic Press; New York, NY, USA: 1982.
  • 39. Wyner A.D., Ziv J. A theorem on the entropy of certain binary sequences and applications (Part I). IEEE Trans. Inf. Theory. 1973;19:769–772. doi: 10.1109/TIT.1973.1055107.
