Entropy. 2020 Jun 16;22(6):665. doi: 10.3390/e22060665

Privacy-Aware Distributed Hypothesis Testing

Sreejith Sreekumar 1,*, Asaf Cohen 2, Deniz Gündüz 3
PMCID: PMC7517198  PMID: 33286437

Abstract

A distributed binary hypothesis testing (HT) problem involving two parties, a remote observer and a detector, is studied. The remote observer has access to a discrete memoryless source, and communicates its observations to the detector via a rate-limited noiseless channel. The detector observes another discrete memoryless source, and performs a binary hypothesis test on the joint distribution of its own observations with those of the observer. While the goal of the observer is to maximize the type II error exponent of the test for a given type I error probability constraint, it also wants to keep a private part of its observations as oblivious to the detector as possible. Considering both equivocation and average distortion under a causal disclosure assumption as possible measures of privacy, the trade-off between the communication rate from the observer to the detector, the type II error exponent, and privacy is studied. For the general HT problem, we establish single-letter inner bounds on both the rate-error exponent-equivocation and rate-error exponent-distortion trade-offs. Subsequently, single-letter characterizations for both trade-offs are obtained (i) for testing against conditional independence of the observer’s observations from those of the detector, given some additional side information at the detector; and (ii) when the communication rate constraint over the channel is zero. Finally, we show by providing a counter-example that the strong converse, which holds for distributed HT without a privacy constraint, does not hold when a privacy constraint is imposed. This implies that in general, the rate-error exponent-equivocation and rate-error exponent-distortion trade-offs are not independent of the type I error probability constraint.

Keywords: Hypothesis testing, privacy, testing against conditional independence, error exponent, equivocation, distortion, causal disclosure

1. Introduction

Data inference and privacy are often contradicting objectives. In many multi-agent systems, each agent/user reveals information about its data to a remote service, application or authority, which in turn, provides certain utility to the users based on their data. Many emerging networked systems can be thought of in this context, from social networks to smart grids and communication networks. While obtaining the promised utility is the main goal of the users, privacy of the data that is shared is becoming increasingly important. Thus, it is critical that the users ensure a desired level of privacy for the sensitive information revealed, while maximizing the utility subject to this constraint.

In many distributed learning or distributed decision-making applications, typically the goal is to learn the joint probability distribution of data available at different locations. In some cases, there may be prior knowledge about the joint distribution, for example, that it belongs to a certain set of known probability distributions. In such a scenario, the nodes communicate their observations to the detector, which then applies hypothesis testing (HT) on the underlying joint distribution of the data based on its own observations and those received from other nodes. However, with the efficient data mining and machine learning algorithms available today, the detector can illegitimately infer some unintended private information from the data provided to it exclusively for HT purposes. Such threats are becoming increasingly imminent as large amounts of seemingly irrelevant yet sensitive data are collected from users, such as in medical research [1], social networks [2], online shopping [3] and smart grids [4]. Therefore, there is an inherent trade-off between the utility acquired by sharing data and the associated privacy leakage.

There are several practical scenarios where the above-mentioned trade-off arises. For example, consider the issue of consumer privacy in the context of online shopping. A consumer would like to share some information about his/her shopping behavior, e.g., shopping history and preferences, with the shopping portal to get better deals and recommendations on relevant products. The shopping portal would like to determine whether the consumer belongs to its target age group (e.g., below 30 years old) before sending special offers to this customer. Assuming that the shopping patterns of the users within and outside the target age groups are independent, the shopping portal performs a hypothesis test to check if the consumer’s shared data is correlated with the data of its own customers. If the consumer is indeed within the target age group, the shopping portal would like to gather more information about this potential customer, particular interests, more accurate age estimation, etc.; while the user is reluctant to provide any further information. Yet another relevant example is the issue of user privacy in the context of wearable Internet of Things (IoT) devices such as smart watches and fitness trackers, which collect information on routine daily activities, and often have a third-party cloud interface.

In this paper, we study distributed HT (DHT) with a privacy constraint, in which an observer communicates its observations to a detector over a noiseless rate-limited channel of rate R nats per observed sample. Using the data received from the observer, the detector performs binary HT on the joint distribution of its own observations and those of the observer. The performance of the HT is measured by the asymptotic exponential rate of decay of the type II error probability, known as the type II error exponent (or error exponent henceforth), for a given constraint on the type I error probability (definitions will be given below). While the goal is to maximize the performance of the HT, the observer also wants to maintain a certain level of privacy against the detector for some latent private data that is correlated with its observations. We are interested in characterizing the trade-off between the communication rate from the observer to the detector over the channel, error exponent achieved by the HT and the amount of information leakage of private data. A special case of HT known as testing against conditional independence (TACI) will be of particular interest. In TACI, the detector tests whether its own observations are independent of those at the observer, conditioned on additional side information available at the detector.

1.1. Background

Distributed HT without any privacy constraint has been studied extensively from an information-theoretic perspective in the past, although many open problems remain. The fundamental results for this problem are first established in [5], which includes a single-letter lower bound on the optimal error exponent and a strong converse result which states that the optimal error exponent is independent of the constraint on the type I error probability. Exact single-letter characterization of the optimal error exponent for the testing against independence (TAI) problem, i.e., TACI with no side information at the detector, is also obtained. The lower bound established in [5] is further improved in [6,7]. Strong converse is studied in the context of complete data compression and zero-rate compression in [6,8], respectively, where in the former, the observer communicates to the detector using a message set of size two, while in the latter using a message set whose size grows sub-exponentially with the number of observed samples. The TAI problem with multiple observers remains open (similar to several other distributed compression problems when a non-trivial fidelity criterion is involved); however, the optimal error exponent is obtained in [9] when the sources observed at different observers follow a certain Markov relation. The scenario in which, in addition to HT, the detector is also interested in obtaining a reconstruction of the observer’s source, is studied in [10]. The authors characterize the trade-off between the achievable error exponent and the average distortion between the observer’s observations and the detector’s reconstruction. The TACI is first studied in [11], where the optimality of a random binning-based encoding scheme is shown. The optimal error exponent for TACI over a noisy communication channel is established in [12]. Extension of this work to general HT over a noisy channel is considered in [13], where lower bounds on the optimal error exponent are obtained by using a separation-based scheme and also using hybrid coding for the communication between the observer and the detector. The TACI with a single observer and multiple detectors is studied in [14], where each detector tests for the conditional independence of its own observations from those of the observer. The general HT version of this problem over a noisy broadcast channel and DHT over a multiple access channel is explored in [15]. While all the above works consider the asymmetric objective of maximizing the error exponent under a constraint on the type I error probability, the trade-off between the exponential rates of decay of both the type I and type II error probabilities is considered in [16,17,18].

Data privacy has been a hot topic of research in the past decade, spanning across multiple disciplines in computer and computational sciences. Several practical schemes have been proposed that deal with the protection or violation of data privacy in different contexts, e.g., see [19,20,21,22,23,24]. More relevant for our work, HT under mutual information and maximal leakage privacy constraints have been studied in [25,26], respectively, where the observer uses a memoryless privacy mechanism to convey a noisy version of its observed data to the detector. The detector performs HT on the probability distribution of the observer’s data, and the optimal privacy mechanism that maximizes the error exponent while satisfying the privacy constraint is analyzed. Recently, a distributed version of this problem has been studied in [27], where the observer applies a privacy mechanism to its observed data prior to further coding for compression, and the goal at the detector is to perform a HT on the joint distribution of its own observations with those of the observer. In contrast with [25,26,27], we study DHT with a privacy constraint, but without considering a separate privacy mechanism at the observer. In Section 2, we will further discuss the differences between the system model considered here and that of [27].

It is important to note here that the data privacy problem is fundamentally different from that of data security against an eavesdropper or an adversary. In data security, sensitive data is to be protected against an external malicious agent distinct from the legitimate parties in the system. The techniques for guaranteeing data security usually involve either cryptographic methods in which the legitimate parties are assumed to have additional resources unavailable to the adversary (e.g., a shared private key) or the availability of better communication channel conditions (e.g., using wiretap codes). However, in data privacy problems, the sensitive data is to be protected from the same legitimate party that receives the messages and provides the utility; and hence, the above-mentioned techniques for guaranteeing data security are not applicable. Another model frequently used in the context of information-theoretic security assumes the availability of different side information at the legitimate receiver and the eavesdropper [28,29]. A DHT problem with security constraints formulated along these lines is studied in [30], where the authors propose an inner bound on the rate-error exponent-equivocation trade-off. While our model is related to that in [30] when the side information at the detector and eavesdropper coincide, there are some important differences which will be highlighted in Section 2.3.

Many different privacy measures have been considered in the literature to quantify the amount of private information leakage, such as k-anonymity [31], differential privacy (DP) [32], mutual information leakage [33,34,35], maximal leakage [36], and total variation distance [37], to name a few; see [38] for a detailed survey. Among these, mutual information between the private and revealed information (or, equivalently, the equivocation of private information given the revealed information) is perhaps the most commonly used measure in the information-theoretic studies of privacy. It is well known that a necessary and sufficient condition to guarantee statistical independence between two random variables is to have zero mutual information between them. Furthermore, the average information leakage measured using an arbitrary privacy measure is upper bounded by a constant multiplicative factor of that measured by mutual information [34]. It is also shown in [33] that a differentially private scheme is not necessarily private when the information leakage is measured by mutual information. This is done by constructing an example that is differentially private, yet the mutual information leakage is arbitrarily high. Mutual information-based measures have also been used in cryptographic security studies. For example, the notion of semantic security defined in [39] is shown to be equivalent to a measure based on mutual information in [40].

A rate-distortion approach to privacy is first explored by Yamamoto in [41] for a rate-constrained noiseless channel, where in addition to a distortion constraint for legitimate data, a minimum distortion requirement is enforced for the private part. Recently, there have been several works that have used distortion as a security or privacy metric in several different contexts, such as side-information privacy in discriminatory lossy source coding [42] and rate-distortion theory of secrecy systems [43,44]. More specifically, in [43], the distortion-based security measure is analyzed under a causal disclosure assumption, in which the data samples to be protected are causally revealed to the eavesdropper (excluding the current sample), yet the average distortion over the entire block has to satisfy a desired lower bound. This assumption ensures that distortion as a secrecy measure is more robust (see ([43], Section I-A)), and could in practice model scenarios in which the sensitive data to be protected is eventually available to the eavesdropper with some delay, but the protection of the current data sample is important. In this paper, we will consider both equivocation and average distortion under a causal disclosure assumption as measures of privacy. In [45], error exponent of a HT adversary is considered to be a privacy measure. This can be considered to be the opposite setting to ours, in the sense that while the goal here is to increase the error exponent under a privacy leakage constraint, the goal in [45] is to reduce the error exponent under a constraint on possible transformations that can be applied on the data.

It is instructive to compare the privacy measures considered in this paper with DP. Towards this, note that average distortion and equivocation (see Definitions 1 and 2) are “average case” privacy measures, while DP is a “worst case” measure that focuses on the statistical indistinguishability of neighboring datasets that differ in just one entry. Considering this aspect, it may appear that these privacy measures are unrelated. However, as shown in [46], there is an interesting connection between them. More specifically, the maximum conditional mutual information leakage between the revealed data $Y$ and an entry $X_i$ of the dataset given all the other $n-1$ entries $X_{-i} = X^n \setminus \{X_i\}$, i.e., $\max I(Y;X_i|X_{-i})$, is sandwiched between the so-called $\epsilon$-DP and $(\epsilon,\delta)$-DP in terms of the strength of the privacy measure, where the maximization is over all distributions $P_{X^n}$ on $\mathcal{X}^n$ and entries $i \in [1:n]$ ([46], Theorem 1). This implies that as a privacy measure, equivocation (equivalent to mutual information leakage) is weaker than $\epsilon$-DP, and stronger than $(\epsilon,\delta)$-DP, at least for some probability distributions on the data. On the other hand, equivocation and average distortion are relatively well-behaved privacy measures compared to DP, and often result in clean and exactly computable characterizations of the optimal trade-off for the problem at hand. Moreover, as already shown in [39,40,47,48], the trade-off resulting from “average” constraints turns out to be the same as that with stricter constraints in many interesting cases. Hence, it is of interest to consider such average case privacy measures as a starting point for further investigation with stricter measures.

DP has been used extensively in privacy studies including those that involve learning and HT [49,50,51,52,53,54,55,56,57,58,59]. More relevant to the distributed HT problem at hand is the local differentially private model employed in [49,50,51,56], in which, depending on the privacy requirement, a certain amount of random noise is injected into the user’s data before further processing, while the utility is maximized subject to this constraint. Nevertheless, there are key differences between these models and ours. For example, in [49], the goal is to learn from differentially private “examples”, the underlying “concept” (model that maps examples to “labels”) such that the error probability in predicting the label for future examples is minimized, irrespective of the statistics of the examples. Hence, the utility in [49] is to learn an unknown model accurately, whereas our objective is to test between two known probability distributions. Furthermore, in our setting (unlike [49,50,51,56]), there is an additional requirement to satisfy in terms of the communication rate. These differences perhaps also make DP less suitable as a privacy measure in our model relative to equivocation and average distortion. On one hand, imposing a DP measure in our setting may be overly restrictive since there are only two probability distributions involved and DP is tailored for situations where the statistics of the underlying data is unknown. On the other hand, DP is also more unwieldy to analyze under a rate constraint compared to mutual information or average distortion.

The amount of private information leakage that can be tolerated depends on the specific application at hand. While it may be possible to tolerate a moderate amount of information leakage in applications like online shopping or social networks, it may no longer be the case in matters related to information sharing among government agencies or corporations. While it is obvious that maximum privacy can be attained by revealing no information, this typically comes at the cost of zero utility. On the other hand, maximum utility can be achieved by revealing all the information, but at the cost of minimum privacy. Characterizing the optimal trade-off between the utility and the minimum privacy leakage between these two extremes is a fundamental and challenging research problem.

1.2. Main Contributions

The main contributions of this work are as follows.

  1. In Section 3, Theorem 1 (resp. Theorem 2), we establish a single-letter inner bound on the rate-error exponent-equivocation (resp. rate-error exponent-distortion) trade-off for DHT with a privacy constraint. The distortion and equivocation privacy constraints we consider, which are given in (6) and (7), respectively, are slightly stronger than what is usually considered in the literature (stated in (8) and (9), respectively).

  2. Exact characterizations are obtained for some important special cases in Section 4. More specifically, a single-letter characterization of the optimal rate-error exponent-equivocation (resp. rate-error exponent-distortion) trade-off is established for:

    • (a) TACI with a privacy constraint (for a vanishing type I error probability constraint) in Section 4.1, Proposition 1 (resp. Proposition 2);

    • (b) DHT with a privacy constraint for zero-rate compression in Section 4.2, Proposition 4 (resp. Proposition 3).

    Since the optimal trade-offs in Propositions 3 and 4 are independent of the constraint on the type I error probability, they are strong converse results in the context of HT.

  3. Finally, in Section 5, we provide a counter-example showing that for a positive rate R>0, the strong converse result does not hold in general for TAI with a privacy constraint.

1.3. Organization

The organization of the paper is as follows. Basic notations are introduced in Section 2.1. The problem formulation and associated definitions are given in Section 2.2. Main results are presented in Section 3 to Section 5. The proofs of the results are presented either in the Appendix or immediately after the statement of the result. Finally, Section 6 concludes the paper with some open problems for future research.

2. Preliminaries

2.1. Notations

$\mathbb{N}$, $\mathbb{R}$ and $\mathbb{R}_{\geq 0}$ stand for the set of natural numbers, real numbers and non-negative real numbers, respectively. For $a \in \mathbb{R}_{\geq 0}$, $[a] := \{i \in \mathbb{N} : i \leq a\}$, and for $a \in \mathbb{R}$, $a^+ := \max\{0,a\}$ ($:=$ represents equality by definition). Calligraphic letters, e.g., $\mathcal{A}$, denote sets, while $|\mathcal{A}|$ and $\mathcal{A}^c$ denote the cardinality and complement of $\mathcal{A}$, respectively. $\mathbb{1}(\cdot)$ denotes the indicator function, while $O(\cdot)$, $o(\cdot)$ and $\Omega(\cdot)$ stand for the standard asymptotic notations of Big-O, Little-O and Big-$\Omega$, respectively. For a real sequence $\{a_n\}_{n \in \mathbb{N}}$ and $b \in \mathbb{R}$, $a_n \xrightarrow{(n)} b$ represents $\lim_{n\to\infty} a_n = b$. Similar notations apply for asymptotic inequalities, e.g., $a_n \overset{(n)}{\geq} b$ means that $\liminf_{n\to\infty} a_n \geq b$. Throughout this paper, the base of the logarithms is taken to be $e$, and whenever the range of a summation is not specified, it means summation over the entire support, e.g., $\sum_u$ denotes $\sum_{u \in \mathcal{U}}$.

All the random variables (r.v.’s) considered in this paper are discrete with finite support unless specified otherwise. We denote r.v.’s, their realizations and support by upper case, lower case and calligraphic letters (e.g., $X$, $x$ and $\mathcal{X}$), respectively. The joint probability distribution of r.v.’s $X$ and $Y$ is denoted by $P_{XY}$, while their marginals are denoted by $P_X$ and $P_Y$. The set of all probability distributions with support $\mathcal{X}$ and $\mathcal{X} \times \mathcal{Y}$ are represented by $\mathcal{P}(\mathcal{X})$ and $\mathcal{P}(\mathcal{X} \times \mathcal{Y})$, respectively. For $j,i \in \mathbb{N}$, the random vector $(X_i,\ldots,X_j)$, $j \geq i$, is denoted by $X_i^j$, while $X^j$ stands for $(X_1,\ldots,X_j)$. Similar notation holds for vectors of realizations. $X - Y - Z$ denotes a Markov chain relation between the r.v.’s $X$, $Y$ and $Z$. $\mathbb{P}_P(\mathcal{E})$ denotes the probability of event $\mathcal{E}$ with respect to the probability measure induced by distribution $P$, and $\mathbb{E}_P[\cdot]$ denotes the corresponding expectation. The subscript $P$ is omitted when the distribution involved is clear from the context. For two probability distributions $P$ and $Q$ defined on a common support, $P \ll Q$ denotes that $P$ is absolutely continuous with respect to $Q$.

Following the notation in [60], for $P_X \in \mathcal{P}(\mathcal{X})$ and $\delta \geq 0$, the $P_X$-typical set is

$\mathcal{T}_{[P_X]_\delta}^n := \Big\{ x^n \in \mathcal{X}^n : \Big| P_X(x) - \frac{1}{n}\sum_{i=1}^n \mathbb{1}(x_i = x) \Big| \leq \delta, \ \forall\, x \in \mathcal{X} \Big\},$

and the $P_X$-type class (set of sequences of type or empirical distribution $P_X$) is $\mathcal{T}_{P_X}^n := \mathcal{T}_{[P_X]_0}^n$. The set of all possible types of sequences of length $n$ over an alphabet $\mathcal{X}$ and the set of types in $\mathcal{T}_{[P_X]_\delta}^n$ are denoted by $\mathcal{P}_n(\mathcal{X})$ and $\mathcal{P}_n\big(\mathcal{T}_{[P_X]_\delta}^n\big)$, respectively. Similar notations apply for pairs and larger combinations of r.v.’s, e.g., $\mathcal{T}_{[P_{XY}]_\delta}^n$, $\mathcal{T}_{P_{XY}}^n$, $\mathcal{P}_n(\mathcal{X}\times\mathcal{Y})$ and $\mathcal{P}_n\big(\mathcal{T}_{[P_{XY}]_\delta}^n\big)$. The conditional $P_{Y|X}$ type class of a sequence $x^n \in \mathcal{X}^n$ is

$\mathcal{T}_{P_{Y|X}}^n(x^n) := \big\{ y^n : (x^n,y^n) \in \mathcal{T}_{P_{XY}}^n \big\}.$ (1)

The standard information-theoretic quantities like the Kullback–Leibler (KL) divergence between distributions $P_X$ and $Q_X$, the entropy of $X$ with distribution $P_X$, the conditional entropy of $X$ given $Y$ and the mutual information between $X$ and $Y$ with joint distribution $P_{XY}$, are denoted by $D(P_X||Q_X)$, $H_{P_X}(X)$, $H_{P_{XY}}(X|Y)$ and $I_{P_{XY}}(X;Y)$, respectively. When the distributions of the r.v.’s involved are clear from the context, the last three quantities are denoted simply by $H(X)$, $H(X|Y)$ and $I(X;Y)$, respectively. Given realizations $X^n = x^n$ and $Y^n = y^n$, $H_e(y^n|x^n)$ denotes the conditional empirical entropy given by

$H_e(y^n|x^n) := H_{P_{\tilde{X}\tilde{Y}}}(\tilde{Y}|\tilde{X}),$ (2)

where $P_{\tilde{X}\tilde{Y}}$ denotes the joint type of $(x^n,y^n)$. Finally, the total variation between probability distributions $P_X$ and $Q_X$ defined on the same support $\mathcal{X}$ is

$||P_X - Q_X|| := \frac{1}{2}\sum_{x \in \mathcal{X}} |P_X(x) - Q_X(x)|.$
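As a quick illustration of the two notions just defined, the following Python sketch (our own code; the distributions and parameter values below are arbitrary examples, not from the paper) checks membership of a sequence in the $P_X$-typical set and computes the total variation distance between two pmfs.

```python
import numpy as np

def empirical_dist(xn, alphabet_size):
    """Empirical distribution (type) of the sequence xn over {0, ..., alphabet_size-1}."""
    counts = np.bincount(xn, minlength=alphabet_size)
    return counts / len(xn)

def is_typical(xn, P_X, delta):
    """True iff xn lies in the P_X-typical set T_{[P_X]_delta}^n."""
    emp = empirical_dist(xn, len(P_X))
    return np.all(np.abs(P_X - emp) <= delta)

def total_variation(P, Q):
    """Total variation distance ||P - Q|| = (1/2) * sum_x |P(x) - Q(x)|."""
    return 0.5 * np.sum(np.abs(np.asarray(P) - np.asarray(Q)))

# Example: draw an i.i.d. sequence from P_X and test typicality.
rng = np.random.default_rng(0)
P_X = np.array([0.5, 0.3, 0.2])
xn = rng.choice(len(P_X), size=1000, p=P_X)
print(is_typical(xn, P_X, delta=0.05))          # True with high probability
print(total_variation(P_X, [1/3, 1/3, 1/3]))    # approximately 0.1667
```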

2.2. Problem Formulation

Consider the HT setup illustrated in Figure 1, where $(U^n,V^n,S^n)$ denotes $n$ independent and identically distributed (i.i.d.) copies of the triplet of r.v.’s $(U,V,S)$. The observer observes $U^n$ and sends the message index $M$ to the detector over an error-free channel, where $M \sim f_n(\cdot|U^n)$, $f_n: \mathcal{U}^n \to \mathcal{P}(\mathcal{M})$ and $\mathcal{M} = [e^{nR}]$. Given its own observation $V^n$, the detector performs a HT on the joint distribution of $U^n$ and $V^n$ with null hypothesis

$H_0: (U^n,V^n) \sim \prod_{i=1}^n P_{UV},$

and alternate hypothesis

$H_1: (U^n,V^n) \sim \prod_{i=1}^n Q_{UV}.$

Figure 1. DHT with a privacy constraint.

Let $H$ and $\hat{H}$ denote the r.v.’s corresponding to the true hypothesis and the output of the HT, respectively, with support $\mathcal{H} = \hat{\mathcal{H}} = \{0,1\}$, where 0 denotes the null hypothesis and 1 the alternate hypothesis. Let $g_n: \mathcal{M} \times \mathcal{V}^n \to \mathcal{P}(\hat{\mathcal{H}})$ denote the decision rule at the detector, which outputs $\hat{H} \sim g_n(M,V^n)$. Then, the type I and type II error probabilities achieved by an $(f_n,g_n)$ pair are given by

$\alpha_n(f_n,g_n) := \mathbb{P}(\hat{H}=1|H=0) = P_{\hat{H}}(1),$

and

$\beta_n(f_n,g_n) := \mathbb{P}(\hat{H}=0|H=1) = Q_{\hat{H}}(0),$

respectively, where

$P_{\hat{H}}(1) = \sum_{u^n,m,v^n} \prod_{i=1}^n P_{UV}(u_i,v_i)\, f_n(m|u^n)\, g_n(1|m,v^n),$

and

$Q_{\hat{H}}(0) = \sum_{u^n,m,v^n} \prod_{i=1}^n Q_{UV}(u_i,v_i)\, f_n(m|u^n)\, g_n(0|m,v^n).$

Let $P_{U^nV^nS^nM\hat{H}}$ and $Q_{U^nV^nS^nM\hat{H}}$ denote the joint distribution of $(U^n,V^n,S^n,M,\hat{H})$ under the null and alternate hypotheses, respectively. For a given type I error probability constraint $\epsilon$, define the minimum type II error probability over all possible detectors as

$\bar{\beta}_n(f_n,\epsilon) := \inf_{g_n} \beta_n(f_n,g_n), \ \text{such that} \ \alpha_n(f_n,g_n) \leq \epsilon.$ (3)

The performance of the HT is measured by the error exponent achieved by the test for a given constraint $\epsilon$ on the type I error probability, i.e., $\liminf_{n\to\infty} -\frac{1}{n}\log\bar{\beta}_n(f_n,\epsilon)$. Although the goal of the detector is to maximize the error exponent achieved for the HT, it is also curious about the latent r.v. $S^n$ that is correlated with $U^n$. $S^n$ is referred to as the private part of $U^n$, and $(S^n,U^n,V^n)$ is distributed i.i.d. according to the joint distribution $P_{SUV}$ and $Q_{SUV}$ under the null and alternate hypothesis, respectively. It is desired to keep the private part as concealed as possible from the detector. We consider two measures of privacy for $S^n$ at the detector. The first is the equivocation defined as $H(S^n|M,V^n)$. The second one is the average distortion between $S^n$ and its reconstruction $\hat{S}^n$ at the detector, measured according to an arbitrary bounded additive distortion metric $d: \mathcal{S} \times \hat{\mathcal{S}} \to [0,D_m]$ with the multi-letter distortion defined as

$d(s^n,\hat{s}^n) := \sum_{i=1}^n d(s_i,\hat{s}_i).$ (4)

We will assume the causal disclosure assumption, i.e., $\hat{S}_i$ is a function of $S^{i-1}$ in addition to $(M,V^n)$. The goal is to ensure that the error exponent for HT is maximized, while satisfying the constraints on the type I error probability $\epsilon$ and the privacy of $S^n$. In the sequel, we study the trade-off between the rate, error exponent (henceforth also referred to simply as the error exponent) and privacy achieved in the above setting. Before delving into that, a few definitions are in order.
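To make the setup concrete, the following Python sketch (a toy illustration of our own; the joint distributions, the one-bit encoder and the detector below are hypothetical choices, not from the paper) computes the type I and type II error probabilities of a deterministic encoder-detector pair by direct enumeration of the expressions for $P_{\hat{H}}(1)$ and $Q_{\hat{H}}(0)$.

```python
import itertools
import numpy as np

# Toy joint distributions on (U, V) under H0 and H1 (binary alphabets).
P_UV = np.array([[0.4, 0.1],
                 [0.1, 0.4]])                 # correlated under H0
Q_UV = np.outer(P_UV.sum(1), P_UV.sum(0))     # independent under H1, same marginals

n = 5

def f_n(un):
    """Toy deterministic encoder: one-bit message = majority vote of u^n."""
    return int(sum(un) > n / 2)

def g_n(m, vn):
    """Toy detector: declare H0 (output 0) iff the message agrees with the majority of v^n."""
    return 0 if m == int(sum(vn) > n / 2) else 1

alpha = beta = 0.0
for un in itertools.product([0, 1], repeat=n):
    for vn in itertools.product([0, 1], repeat=n):
        p = np.prod([P_UV[u, v] for u, v in zip(un, vn)])
        q = np.prod([Q_UV[u, v] for u, v in zip(un, vn)])
        h_hat = g_n(f_n(un), vn)
        alpha += p * (h_hat == 1)   # type I error: reject H0 when H0 is true
        beta  += q * (h_hat == 0)   # type II error: accept H0 when H1 is true
print(alpha, beta)
```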

Definition 1.

For a given type I error probability constraint $\epsilon$, a rate-error exponent-distortion tuple $(R,\kappa,\Delta_0,\Delta_1)$ is achievable, if there exists a sequence of encoding and decoding functions $f_n: \mathcal{U}^n \to \mathcal{P}(\mathcal{M})$ and $g_n: \mathcal{M} \times \mathcal{V}^n \to \mathcal{P}(\hat{\mathcal{H}})$ such that

$\liminf_{n\to\infty} \frac{-\log\bar{\beta}_n(f_n,\epsilon)}{n} \geq \kappa,$ (5)

and for any $\gamma > 0$, there exists an $n_0 \in \mathbb{N}$ such that

$\inf_{\{g_{i,n}^{(r)}\}_{i=1}^n} \mathbb{E}\big[d(S^n,\hat{S}^n)\,\big|\,H=j\big] \geq n\Delta_j - \gamma, \quad \forall\, n \geq n_0,\ j=0,1,$ (6)

where $\hat{S}_i \sim g_{i,n}^{(r)}(\cdot|M,V^n,S^{i-1})$, and $g_{i,n}^{(r)}: [e^{nR}] \times \mathcal{V}^n \times \mathcal{S}^{i-1} \to \mathcal{P}(\hat{\mathcal{S}})$ denotes an arbitrary stochastic reconstruction map at the detector. The rate-error exponent-distortion region $\mathcal{R}_d(\epsilon)$ is the closure of the set of all such achievable $(R,\kappa,\Delta_0,\Delta_1)$ tuples for a given $\epsilon$.

Definition 2.

For a given type I error probability constraint $\epsilon$, a rate-error exponent-equivocation tuple $(R,\kappa,\Lambda_0,\Lambda_1)$ (it is well known that equivocation as a privacy measure is a special case of average distortion under the causal disclosure assumption and the log-loss distortion metric [43]; however, we provide a separate definition of the rate-error exponent-equivocation region for completeness) is achievable, if there exists a sequence of encoding and decoding functions $f_n: \mathcal{U}^n \to \mathcal{P}(\mathcal{M})$ and $g_n: [e^{nR}] \times \mathcal{V}^n \to \mathcal{P}(\hat{\mathcal{H}})$ such that (5) is satisfied, and for any $\gamma > 0$, there exists an $n_0 \in \mathbb{N}$ such that

$H(S^n|M,V^n,H=i) \geq n\Lambda_i - \gamma, \quad \forall\, n \geq n_0,\ i \in \{0,1\}.$ (7)

The rate-error exponent-equivocation region $\mathcal{R}_e(\epsilon)$ is the closure of the set of all such achievable $(R,\kappa,\Lambda_0,\Lambda_1)$ tuples for a given $\epsilon$.

Please note that the privacy measures considered in (6) and (7) are stronger than

$\liminf_{n\to\infty} \inf_{\{g_{i,n}^{(r)}\}_{i=1}^n} \mathbb{E}\Big[\tfrac{1}{n}\, d(S^n,\hat{S}^n)\,\Big|\,H=i\Big] \geq \Delta_i, \quad i=0,1,$ (8)
and $\liminf_{n\to\infty} \frac{1}{n} H(S^n|M,V^n,H=i) \geq \Lambda_i, \quad i=0,1,$ (9)

respectively. To see this for the equivocation privacy measure, note that if $H(S^n|M,V^n,H=i) = n\Lambda_i^* - n^a$, $i=0,1$, for some $a \in (0,1)$, then the equivocation pair $(\Lambda_0^*,\Lambda_1^*)$ is achievable under the constraint given in (9), while it is not achievable under the constraint given in (7).

2.3. Relation to Previous Work

Before stating our results, we briefly highlight the differences between our system model and the ones studied in [27,30]. In [27], the observer applies a privacy mechanism to the data before releasing it to the transmitter, which performs further encoding prior to transmission to the detector. More specifically, the observer checks whether $U^n \in \mathcal{T}_{[P_U]_\delta}^n$ and, if successful, sends the output of a memoryless privacy mechanism applied to $U^n$ to the transmitter. Otherwise, it outputs an $n$-length zero-sequence. The privacy mechanism plays the role of randomizing the data (or adding noise) to achieve the desired privacy. Such randomized privacy mechanisms are popular in privacy studies, and have been used in [25,26,61]. In our model, the tasks of coding for privacy and compression are done jointly by using all the available data samples $U^n$. Also, while we consider the equivocation (and average distortion) between the revealed information and the private part as the privacy measure, in [27], the mutual information between the observer’s observations and the output of the memoryless mechanism is the privacy measure. As a result of these differences, there exist some points in the rate-error exponent-privacy trade-off that are achievable in our model, but not in [27]. For instance, a perfect privacy condition $\Lambda_0 = 0$ for testing against independence in ([27], Theorem 2) would imply that the error exponent is also zero, since the output of the memoryless mechanism has to be independent of the observer’s observations (under both hypotheses). However, as we later show in Example 2, a positive error exponent is achievable while guaranteeing perfect privacy in our model.

On the other hand, the difference between our model and [30] arises from the difference in the privacy constraint as well as the privacy measure. Specifically, the goal in [30] is to keep Un private from an illegitimate eavesdropper, while the objective here is to keep a r.v. Sn that is correlated with Un private from the detector. Also, we consider the more general average distortion (under causal disclosure) as a privacy measure, in addition to equivocation in [30]. Moreover, as already noted, the equivocation privacy constraint in (7) is more stringent than (9) that is considered in [30]. To satisfy the distortion requirement or the stronger equivocation privacy constraint in (7), we require that the a posteriori probability distribution of Sn given the observations (M,Vn) at the detector is close in some sense to a desired “target" memoryless distribution. To achieve this, we use a stochastic encoding scheme to induce the necessary randomness for Sn at the detector, which to the best of our knowledge has not been considered previously in the context of DHT. Consequently, the analysis of the type I and type II error probabilities and privacy achieved are novel. Another subtle yet important difference is that the marginal distributions of Un and the side information at the eavesdropper are assumed to be the same under the null and alternate hypotheses in [30], which is not the case here. This necessitates separate analysis for the privacy achieved under the two hypotheses.

Next, we state some supporting results that will be useful later for proving the main results.

2.4. Supporting Results

Let

$g_{\mathcal{A}_n}^{(d)}(m,v^n) = \mathbb{1}\big((m,v^n) \in \mathcal{A}_n^c\big)$ (10)

denote a deterministic detector with acceptance region $\mathcal{A}_n \subseteq [e^{nR}] \times \mathcal{V}^n$ for $H_0$ and $\mathcal{A}_n^c$ for $H_1$. Then, the type I and type II error probabilities are given by

$\alpha_n(f_n,g_n) := P_{MV^n}(\mathcal{A}_n^c) = \mathbb{E}_P\big[\mathbb{1}\big((M,V^n) \in \mathcal{A}_n^c\big)\big],$ (11)
$\beta_n(f_n,g_n) := Q_{MV^n}(\mathcal{A}_n) = \mathbb{E}_Q\big[\mathbb{1}\big((M,V^n) \in \mathcal{A}_n\big)\big].$ (12)

Lemma 1.

Any error exponent that is achievable is also achievable by a deterministic detector of the form given in (10) for some $\mathcal{A}_n \subseteq [e^{nR}] \times \mathcal{V}^n$, where $\mathcal{A}_n$ and $\mathcal{A}_n^c$ denote the acceptance regions for $H_0$ and $H_1$, respectively.

The proof of Lemma 1 is given in Appendix A for completeness. Due to Lemma 1, henceforth we restrict our attention to a deterministic gn as given in (10).

The next result shows that without loss of generality (w.l.o.g.), it is also sufficient to consider $g_{i,n}^{(r)}$ (in Definition 1) to be deterministic functions of the form

$\{g_{i,n}^{(r)}\}_{i=1}^n = \{\bar{\phi}_{i,n}(\cdot,\cdot,\cdot)\}_{i=1}^n$ (13)

for the minimization in (6), where $\bar{\phi}_{i,n}: \mathcal{M} \times \mathcal{V}^n \times \mathcal{S}^{i-1} \to \hat{\mathcal{S}}$, $i \in [n]$, denotes an arbitrary deterministic function.

Lemma 2.

The infimum in (6) is achieved by deterministic functions $g_{i,n}^{(r)}$ as given in (13), and hence it is sufficient to restrict to such deterministic $g_{i,n}^{(r)}$ in (6).

The proof of Lemma 2 is given in Appendix B. Next, we state some lemmas that will be handy for upper bounding the amount of privacy leakage in the proofs of the main results stated below. The following one is a well-known result proved in [60] that upper bounds the difference in entropy of two r.v.’s (with a common support) in terms of the total variation distance between their probability distributions.

Lemma 3.

([60], Lemma 2.7) Let $P_X$ and $Q_X$ be distributions defined on a common support $\mathcal{X}$ and let $\rho := ||P_X - Q_X||$. Then, for $\rho \leq \frac{1}{4}$,

$|H_{P_X}(X) - H_{Q_X}(X)| \leq -2\rho \log \frac{2\rho}{|\mathcal{X}|}.$
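As a quick numerical sanity check of this bound (our own illustration with arbitrary example distributions, not part of the paper), the following Python snippet evaluates both sides for a pair of pmfs with small total variation:

```python
import numpy as np

def entropy(P):
    """Shannon entropy with natural log, consistent with the paper's convention."""
    P = np.asarray(P, dtype=float)
    return -np.sum(P[P > 0] * np.log(P[P > 0]))

def tv(P, Q):
    return 0.5 * np.sum(np.abs(np.asarray(P) - np.asarray(Q)))

P = np.array([0.5, 0.3, 0.2])
Q = np.array([0.45, 0.35, 0.2])
rho = tv(P, Q)                        # 0.05 <= 1/4, so Lemma 3 applies
lhs = abs(entropy(P) - entropy(Q))
rhs = -2 * rho * np.log(2 * rho / len(P))
print(lhs, rhs, lhs <= rhs)           # the bound holds for this example
```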

The next lemma will be handy in proving Theorems 1 and 2, Proposition 3 and the counter-example for strong converse presented in Section 5.

Lemma 4.

Let $(X^n,Y^n)$ denote $n$ i.i.d. copies of r.v.’s $(X,Y)$, and let $P_{X^nY^n} = \prod_{i=1}^n P_{XY}$ and $Q_{X^nY^n} = \prod_{i=1}^n Q_{XY}$ denote two joint probability distributions on $(X^n,Y^n)$. For $\delta > 0$, define

$\Pi(x^n,\delta,P_X) := \mathbb{1}\big(x^n \notin \mathcal{T}_{[P_X]_\delta}^n\big).$ (14)

If $P_X \neq Q_X$, then for $\delta > 0$ sufficiently small, there exist $\bar{\delta} > 0$ and $n_0(\delta,|\mathcal{X}|,|\mathcal{Y}|) \in \mathbb{N}$ such that for all $n \geq n_0(\delta,|\mathcal{X}|,|\mathcal{Y}|)$,

$\big|\big|Q_{Y^n}(\cdot) - Q_{Y^n|\Pi(X^n,\delta,P_X)}(\cdot|1)\big|\big| \leq e^{-n\bar{\delta}}.$ (15)

If $P_X = Q_X$, then for any $\delta > 0$, there exist $\bar{\delta} > 0$ and $n_0(\delta,|\mathcal{X}|,|\mathcal{Y}|) \in \mathbb{N}$ such that for all $n \geq n_0(\delta,|\mathcal{X}|,|\mathcal{Y}|)$,

$\big|\big|Q_{Y^n}(\cdot) - Q_{Y^n|\Pi(X^n,\delta,P_X)}(\cdot|0)\big|\big| \leq e^{-n\bar{\delta}}.$ (16)

Also, for any $\delta > 0$, there exist $\bar{\delta} > 0$ and $n_0(\delta,|\mathcal{X}|,|\mathcal{Y}|) \in \mathbb{N}$ such that for all $n \geq n_0(\delta,|\mathcal{X}|,|\mathcal{Y}|)$,

$\big|\big|P_{Y^n}(\cdot) - P_{Y^n|\Pi(X^n,\delta,P_X)}(\cdot|0)\big|\big| \leq e^{-n\bar{\delta}}.$ (17)

Proof. 

The proof is presented in Appendix C. ☐

In the next section, we establish an inner bound on Re(ϵ) and Rd(ϵ).

3. Main Results

The following two theorems are the main results of this paper providing inner bounds for Re(ϵ) and Rd(ϵ), respectively.

Theorem 1.

For $\epsilon \in (0,1)$, $(R,\kappa,\Lambda_0,\Lambda_1) \in \mathcal{R}_e(\epsilon)$ if there exists an auxiliary r.v. $W$, such that $(V,S)-U-W$, and

$R \geq I_P(W;U|V),$ (18)
$\kappa \leq \kappa^*(P_{W|U},R),$ (19)
$\Lambda_0 \leq H_P(S|W,V),$ (20)
$\Lambda_1 \leq \mathbb{1}(P_U = Q_U)\, H_Q(S|W,V) + \mathbb{1}(P_U \neq Q_U)\, H_Q(S|V),$ (21)

where

$\kappa^*(P_{W|U},R) := \min\big\{E_1(P_{W|U}),\, E_2(R,P_{W|U})\big\},$
$E_1(P_{W|U}) := \min_{P_{\tilde{U}\tilde{V}\tilde{W}} \in \mathcal{L}_1(P_{UW},P_{VW})} D\big(P_{\tilde{U}\tilde{V}\tilde{W}}\,\big|\big|\,Q_{UV}P_{W|U}\big),$ (22)
$E_2(R,P_{W|U}) := \begin{cases} \min_{P_{\tilde{U}\tilde{V}\tilde{W}} \in \mathcal{L}_2(P_{UW},P_V)} D\big(P_{\tilde{U}\tilde{V}\tilde{W}}\,\big|\big|\,Q_{UV}P_{W|U}\big) + \big(R - I_P(U;W|V)\big), & \text{if } I_P(U;W) > R, \\ \infty, & \text{otherwise}, \end{cases}$
$\mathcal{L}_1(P_{UW},P_{VW}) := \big\{P_{\tilde{U}\tilde{V}\tilde{W}} \in \mathcal{P}(\mathcal{U}\times\mathcal{V}\times\mathcal{W}) : P_{\tilde{U}\tilde{W}} = P_{UW},\ P_{\tilde{V}\tilde{W}} = P_{VW}\big\},$
$\mathcal{L}_2(P_{UW},P_V) := \big\{P_{\tilde{U}\tilde{V}\tilde{W}} \in \mathcal{P}(\mathcal{U}\times\mathcal{V}\times\mathcal{W}) : P_{\tilde{U}\tilde{W}} = P_{UW},\ P_{\tilde{V}} = P_V,\ H_P(W|V) \leq H(\tilde{W}|\tilde{V})\big\},$
$P_{SUVW} := P_{SUV}P_{W|U},$ and $Q_{SUVW} := Q_{SUV}P_{W|U}.$ (23)

Theorem 2.

For a given bounded additive distortion measure $d(\cdot,\cdot)$ and $\epsilon \in (0,1)$, $(R,\kappa,\Delta_0,\Delta_1) \in \mathcal{R}_d(\epsilon)$ if there exist an auxiliary r.v. $W$ and deterministic functions $\phi: \mathcal{W}\times\mathcal{V} \to \hat{\mathcal{S}}$ and $\phi: \mathcal{V} \to \hat{\mathcal{S}}$, such that $(V,S)-U-W$ and (18) and (19),

$\Delta_0 \leq \min_{\phi(\cdot,\cdot)} \mathbb{E}_P\big[d\big(S,\phi(W,V)\big)\big],$ (24)
and $\Delta_1 \leq \mathbb{1}(P_U = Q_U) \min_{\phi(\cdot,\cdot)} \mathbb{E}_Q\big[d\big(S,\phi(W,V)\big)\big] + \mathbb{1}(P_U \neq Q_U) \min_{\phi(\cdot)} \mathbb{E}_Q\big[d\big(S,\phi(V)\big)\big],$ (25)

are satisfied, where $P_{SUVW}$ and $Q_{SUVW}$ are as defined in Theorem 1.

The proof of Theorems 1 and 2 is given in Appendix D. While the rate-error exponent trade-off in Theorems 1 and 2 is the same as that achieved by the Shimokawa-Han-Amari (SHA) scheme [7], the coding strategy achieving it is different due to the privacy constraint. As mentioned above, in order to obtain a single-letter lower bound on the achievable distortion (and achievable equivocation) of the private part at the detector, it is required that the a posteriori probability distribution of $S^n$ given the observations $(M,V^n)$ at the detector is close in some sense to a desired “target” memoryless distribution. For this purpose, we use the so-called likelihood encoder [62,63] (at the observer) in our achievability scheme. The likelihood encoder is a stochastic encoder that induces the necessary randomness for $S^n$ at the detector, and to the best of our knowledge has not been used before in the context of DHT. The analysis of the type I and type II error probabilities and the privacy achieved by our scheme is novel and involves the application of the well-known channel resolvability or soft-covering lemma [62,64,65]. Properties of the total variation distance between probability distributions mentioned in [43] play a key role in this analysis. The analysis also reveals the interesting fact that the coding schemes in Theorems 1 and 2, although quite different from the SHA scheme, achieve the same lower bound on the error exponent.
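As a rough illustration of how a likelihood encoder operates, the following Python sketch (our own simplified example; the single-letter distributions, rate and block length are hypothetical, and the actual scheme in Appendix D additionally involves binning and the privacy analysis) selects a codeword index from a random codebook with probability proportional to the likelihood of the observed sequence under that codeword.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-letter distributions: U and W binary.
P_U = np.array([0.5, 0.5])
P_W_given_U = np.array([[0.8, 0.2],     # P_{W|U}(w|u), rows indexed by u
                        [0.2, 0.8]])
P_W = P_U @ P_W_given_U
P_U_given_W = (P_W_given_U * P_U[:, None]) / P_W[None, :]   # Bayes rule, shape (u, w)

n, rate = 20, 0.4
num_codewords = int(np.exp(n * rate))

# Random codebook: each codeword w^n(j) drawn i.i.d. from P_W.
codebook = rng.choice(2, size=(num_codewords, n), p=P_W)

def likelihood_encoder(un):
    """Select index j with probability proportional to P_{U|W}^n(u^n | w^n(j))."""
    log_lik = np.array([np.sum(np.log(P_U_given_W[un, codebook[j]]))
                        for j in range(num_codewords)])
    probs = np.exp(log_lik - log_lik.max())
    probs /= probs.sum()
    return rng.choice(num_codewords, p=probs)

un = rng.choice(2, size=n, p=P_U)
print(likelihood_encoder(un))
```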

Theorems 1 and 2 provide single-letter inner bounds on $\mathcal{R}_e(\epsilon)$ and $\mathcal{R}_d(\epsilon)$, respectively. A complete computable characterization of these regions would require a matching converse. This is a hard problem, since such a characterization is not available, in general, even for the DHT problem without a privacy constraint (see [5]). However, it is known that a single-letter characterization of the rate-error exponent region exists for the special case of TACI [11]. In the next section, we show that TACI with a privacy constraint also admits a single-letter characterization, in addition to other optimality results.

4. Optimality Results for Special Cases

4.1. TACI with a Privacy Constraint

Assume that the detector observes two discrete memoryless sources $Y^n$ and $Z^n$, i.e., $V^n = (Y^n,Z^n)$. In TACI, the detector tests for the conditional independence of $U$ and $Y$, given $Z$. Thus, the joint distributions of the r.v.’s under the null and alternate hypotheses are given by

$H_0: P_{SUYZ} := P_{S|UYZ}\, P_{U|Z}\, P_{Y|UZ}\, P_Z,$ (26a)

and

$H_1: Q_{SUYZ} := Q_{S|UYZ}\, P_{U|Z}\, P_{Y|Z}\, P_Z,$ (26b)

respectively.

Let $\mathcal{R}_e$ and $\mathcal{R}_d$ denote the rate-error exponent-equivocation and rate-error exponent-distortion regions, respectively, for the case of a vanishing type I error probability constraint, i.e.,

$\mathcal{R}_e := \lim_{\epsilon \to 0} \mathcal{R}_e(\epsilon) \quad \text{and} \quad \mathcal{R}_d := \lim_{\epsilon \to 0} \mathcal{R}_d(\epsilon).$

Assume that the privacy constraint under the alternate hypothesis is inactive. Thus, we are interested in characterizing the set of all tuples $(R,\kappa,\Lambda_0,\Lambda_1) \in \mathcal{R}_e$ and $(R,\kappa,\Delta_0,\Delta_1) \in \mathcal{R}_d$, where

$\Lambda_1 \leq \Lambda_{\min} := H_Q(S|U,Y,Z), \quad \text{and} \quad \Delta_1 \leq \Delta_{\min} := \min_{\phi(u,y,z)} \mathbb{E}_Q\big[d\big(S,\phi(U,Y,Z)\big)\big].$ (27)

Please note that $\Lambda_{\min}$ and $\Delta_{\min}$ correspond to the equivocation and average distortion of $S^n$ at the detector, respectively, when $U^n$ is available directly at the detector under the alternate hypothesis. The above assumption is motivated by scenarios in which the observer is more eager to protect $S^n$ when there is a correlation between its own observation and that of the detector, such as the online shopping portal example mentioned in Section 1. In that example, $U^n$, $S^n$ and $Y^n$ correspond to the shopping behavior, additional information about the customer, and the customer data available to the shopping portal, respectively.

For the above-mentioned case, we have the following results.

Proposition 1.

For the HT given in (26), $(R,\kappa,\Lambda_0,\Lambda_{\min}) \in \mathcal{R}_e$ if and only if there exists an auxiliary r.v. $W$, such that $(Z,Y,S)-U-W$, and

$\kappa \leq I_P(W;Y|Z),$ (28)
$R \geq I_P(W;U|Z),$ (29)
$\Lambda_0 \leq H_P(S|W,Z,Y),$ (30)

for some joint distribution of the form $P_{SUYZW} := P_{SUYZ}\, P_{W|U}$.

Proof. 

For TACI, the inner bound in Theorem 1 yields that for $\epsilon \in (0,1)$, $(R,\kappa,\Lambda_0,\Lambda_1) \in \mathcal{R}_e(\epsilon)$ if there exists an auxiliary r.v. $W$, such that $(Y,Z,S)-U-W$, and

$R \geq I_P(W;U|Y,Z),$ (31)
$\kappa \leq \kappa^*(P_{W|U},R),$ (32)
$\Lambda_0 \leq H_P(S|W,Y,Z),$ (33)
$\Lambda_1 \leq H_Q(S|W,Y,Z),$ (34)

where

$\kappa^*(P_{W|U},R) := \min\big\{E_1(P_{W|U}),\, E_2(R,P_{W|U})\big\},$
$E_1(P_{W|U}) := \min_{P_{\tilde{U}\tilde{Y}\tilde{Z}\tilde{W}} \in \mathcal{L}_1(P_{UW},P_{YZW})} D\big(P_{\tilde{U}\tilde{Y}\tilde{Z}\tilde{W}}\,\big|\big|\,Q_{UYZ}P_{W|U}\big),$ (35)
$E_2(R,P_{W|U}) := \begin{cases} \min_{P_{\tilde{U}\tilde{Y}\tilde{Z}\tilde{W}} \in \mathcal{L}_2(P_{UW},P_{YZ})} D\big(P_{\tilde{U}\tilde{Y}\tilde{Z}\tilde{W}}\,\big|\big|\,Q_{UYZ}P_{W|U}\big) + \big(R - I_P(U;W|Y,Z)\big), & \text{if } I_P(U;W) > R, \\ \infty, & \text{otherwise}, \end{cases}$
$\mathcal{L}_1(P_{UW},P_{YZW}) := \big\{P_{\tilde{U}\tilde{Y}\tilde{Z}\tilde{W}} \in \mathcal{P}(\mathcal{U}\times\mathcal{Y}\times\mathcal{Z}\times\mathcal{W}) : P_{\tilde{U}\tilde{W}} = P_{UW},\ P_{\tilde{Y}\tilde{Z}\tilde{W}} = P_{YZW}\big\},$
$\mathcal{L}_2(P_{UW},P_{YZ}) := \big\{P_{\tilde{U}\tilde{Y}\tilde{Z}\tilde{W}} \in \mathcal{P}(\mathcal{U}\times\mathcal{Y}\times\mathcal{Z}\times\mathcal{W}) : P_{\tilde{U}\tilde{W}} = P_{UW},\ P_{\tilde{Y}\tilde{Z}} = P_{YZ},\ H_P(W|Y,Z) \leq H(\tilde{W}|\tilde{Y},\tilde{Z})\big\},$
$P_{SUYZW} := P_{SUYZ}\, P_{W|U}, \quad Q_{SUYZW} := Q_{SUYZ}\, P_{W|U}.$ (36)

Please note that since $(Y,Z,S)-U-W$, we have

$I_P(W;U) \geq I_P(W;U|Y,Z).$ (37)

Let $\mathcal{B} := \{P_{W|U} : I_P(U;W|Z) \leq R\}$. Then, for $P_{W|U} \in \mathcal{B}$, we have

$E_1(P_{W|U}) = \min_{P_{\tilde{U}\tilde{Y}\tilde{Z}\tilde{W}} \in \mathcal{L}_1(P_{UW},P_{YZW})} D\big(P_{\tilde{U}\tilde{Y}\tilde{Z}\tilde{W}}\,\big|\big|\,Q_{UYZ}P_{W|U}\big) = I_P(Y;W|Z),$
$E_2(R,P_{W|U}) \geq I_P(U;W|Z) - I_P(U;W|Y,Z) = I_P(Y;W|Z).$

Hence,

$\kappa^*(P_{W|U},R) \geq I_P(Y;W|Z).$ (38)

By noting that $\Lambda_{\min} \leq H_Q(S|W,Y,Z)$ (by the data processing inequality), we have shown that for $\Lambda_1 \leq \Lambda_{\min}$, $(R,\kappa,\Lambda_0,\Lambda_1) \in \mathcal{R}_e$ if (28)–(30) are satisfied. This completes the proof of achievability.

Converse: Let $(R,\kappa,\Lambda_0,\Lambda_1) \in \mathcal{R}_e$. Let $T$ be a r.v. uniformly distributed over $[n]$ and independent of all the other r.v.’s $(U^n,Y^n,Z^n,S^n,M)$. Define an auxiliary r.v. $W := (W_T,T)$, where $W_i := (M,Y^{i-1},S^{i-1},Z^{i-1},Z_{i+1}^n)$, $i \in [n]$. Then, we have for sufficiently large $n$ that

$nR \geq H_P(M) \geq H_P(M|Z^n) \geq I_P(M;U^n|Z^n) = \sum_{i=1}^n I_P(M;U_i|U^{i-1},Z^n)$
$= \sum_{i=1}^n I_P(M,U^{i-1},Z^{i-1},Z_{i+1}^n;U_i|Z_i)$ (39)
$= \sum_{i=1}^n I_P(M,U^{i-1},Z^{i-1},Z_{i+1}^n,Y^{i-1},S^{i-1};U_i|Z_i)$ (40)
$\geq \sum_{i=1}^n I_P(M,Z^{i-1},Z_{i+1}^n,Y^{i-1},S^{i-1};U_i|Z_i) = \sum_{i=1}^n I_P(W_i;U_i|Z_i) = nI_P(W_T;U_T|Z_T,T)$
$= nI_P(W_T,T;U_T|Z_T)$ (41)
$= nI_P(W;U|Z).$ (42)

Here, (39) follows since the sequences $(U^n,Z^n)$ are memoryless; (40) follows since $(Y^{i-1},S^{i-1})-(M,U^{i-1},Z^n)-U_i$ form a Markov chain; and, (41) follows from the fact that $T$ is independent of all the other r.v.’s.

The equivocation of $S^n$ under the null hypothesis can be bounded as follows:

$H(S^n|M,Y^n,Z^n,H=0) = \sum_{i=1}^n H(S_i|M,S^{i-1},Y^n,Z^n,H=0)$
$\leq \sum_{i=1}^n H(S_i|M,Y^{i-1},S^{i-1},Z^{i-1},Z_{i+1}^n,Y_i,Z_i,H=0) = \sum_{i=1}^n H(S_i|W_i,Y_i,Z_i,H=0) = nH(S_T|W_T,Y_T,Z_T,T,H=0)$ (43)
$= nH_P(S|W,Y,Z),$ (44)

where $P_{SUYZW} = P_{SUYZ}\, P_{W|U}$ for some conditional distribution $P_{W|U}$. In (43), we used the fact that conditioning reduces entropy.

Finally, we prove the upper bound on $\kappa$. For any encoding function $f_n$ and decision region $\mathcal{A}_n \subseteq \mathcal{M} \times \mathcal{Y}^n \times \mathcal{Z}^n$ for $H_0$ such that $\epsilon_n \to 0$, we have

$D\big(P_{MY^nZ^n}\,\big|\big|\,Q_{MY^nZ^n}\big) \geq P_{MY^nZ^n}(\mathcal{A}_n)\log\frac{P_{MY^nZ^n}(\mathcal{A}_n)}{Q_{MY^nZ^n}(\mathcal{A}_n)} + P_{MY^nZ^n}(\mathcal{A}_n^c)\log\frac{P_{MY^nZ^n}(\mathcal{A}_n^c)}{Q_{MY^nZ^n}(\mathcal{A}_n^c)} \geq -H(\epsilon_n) - (1-\epsilon_n)\log\bar{\beta}_n(f_n,\epsilon_n).$ (45)

Here, (45) follows from the log-sum inequality [60]. Thus,

$\limsup_{n\to\infty} \frac{-\log\bar{\beta}_n(f_n,\epsilon_n)}{n} \leq \limsup_{n\to\infty} \frac{1}{n} D\big(P_{MY^nZ^n}\,\big|\big|\,Q_{MY^nZ^n}\big)$
$= \limsup_{n\to\infty} \frac{1}{n} I_P(M;Y^n|Z^n)$ (46)
$= H_P(Y|Z) - \liminf_{n\to\infty} \frac{1}{n} H_P(Y^n|M,Z^n),$ (47)

where (46) follows since $Q_{MY^nZ^n} = P_{MZ^n}\, P_{Y^n|Z^n}$. The last term can be single-letterized as follows:

$H_P(Y^n|M,Z^n) = \sum_{i=1}^n H_P(Y_i|Y^{i-1},M,Z^n) \geq \sum_{i=1}^n H_P(Y_i|Y^{i-1},S^{i-1},M,Z^n) = \sum_{i=1}^n H_P(Y_i|Z_i,W_i) = nH_P(Y_T|Z_T,W_T,T) = nH_P(Y|Z,W).$ (48)

Substituting (48) in (47), we obtain

$\kappa \leq I_P(Y;W|Z).$ (49)

Also, note that $(Z,Y,S)-U-W$ holds. To see this, note that $(U_i,Y_i,Z_i,S_i)$ are i.i.d. across $i \in [n]$. Hence, any information in $W_i$ about $(Y_i,Z_i,S_i)$ is only through $M$ as a function of $U_i$, and so given $U_i$, $W_i$ is independent of $(Y_i,Z_i,S_i)$. The above Markov chain then follows from the fact that $T$ is independent of $(U^n,Y^n,Z^n,S^n,M)$. This completes the proof of the converse and the theorem. ☐

Next, we state the result for TACI with a distortion privacy constraint, where the distortion is measured using an arbitrary distortion measure $d(\cdot,\cdot)$. Let $\Delta_{\min} := \min_{\phi(u,y,z)} \mathbb{E}_Q\big[d\big(S,\phi(U,Y,Z)\big)\big]$.

Proposition 2.

For the HT given in (26), $(R,\kappa,\Delta_0,\Delta_{\min}) \in \mathcal{R}_d$ if and only if there exist an auxiliary r.v. $W$ and a deterministic function $\phi: \mathcal{W}\times\mathcal{Y}\times\mathcal{Z} \to \hat{\mathcal{S}}$ such that

$R \geq I_P(W;U|Z),$ (50)
$\kappa \leq I_P(W;Y|Z),$ (51)
$\Delta_0 \leq \min_{\phi(\cdot,\cdot,\cdot)} \mathbb{E}_P\big[d\big(S,\phi(W,Y,Z)\big)\big],$ (52)

for some $P_{SUYZW}$ as defined in Proposition 1.

Proof. 

The proof of achievability follows from Theorem 2, similarly to the way Proposition 1 is obtained from Theorem 1. Hence, only the differences will be highlighted. Similar to the inequality $\Lambda_{\min} \leq H_Q(S|W,Y,Z)$ in the proof of Proposition 1, we need to prove the inequality $\Delta_{\min} \leq \min_{\phi(\cdot,\cdot,\cdot)} \mathbb{E}_Q\big[d\big(S,\phi(W,Y,Z)\big)\big]$, where $Q_{SUYZW} := Q_{SUYZ}\, P_{W|U}$ for some conditional distribution $P_{W|U}$. Denoting by $\hat{\phi}$ a minimizer of $\mathbb{E}_Q\big[d\big(S,\phi(W,Y,Z)\big)\big]$, this can be shown as follows:

$\min_{\phi(\cdot,\cdot,\cdot)} \mathbb{E}_Q\big[d\big(S,\phi(W,Y,Z)\big)\big] = \sum_{u,y,z} Q_{UYZ}(u,y,z) \sum_{w} P_{W|U}(w|u) \sum_{s} Q_{S|UYZ}(s|u,y,z)\, d\big(s,\hat{\phi}(w,y,z)\big)$
$\geq \sum_{u,y,z} Q_{UYZ}(u,y,z) \sum_{w,s} P_{W|U}(w|u)\, Q_{S|UYZ}(s|u,y,z)\, d\big(s,\phi^*(u,y,z)\big)$
$\geq \sum_{u,y,z} Q_{UYZ}(u,y,z) \min_{\phi(u,y,z)} \sum_{w,s} P_{W|U}(w|u)\, Q_{S|UYZ}(s|u,y,z)\, d\big(s,\phi(u,y,z)\big)$
$= \sum_{u,y,z} Q_{UYZ}(u,y,z) \min_{\phi(u,y,z)} \sum_{s} Q_{S|UYZ}(s|u,y,z)\, d\big(s,\phi(u,y,z)\big) = \min_{\phi(\cdot,\cdot,\cdot)} \mathbb{E}_Q\big[d\big(S,\phi(U,Y,Z)\big)\big] := \Delta_{\min},$ (53)

where in (53), $\phi^*(u,y,z)$ is chosen such that

$\phi^*(u,y,z) := \arg\min_{\hat{\phi}(w,y,z),\, w \in \mathcal{W}} \sum_{s} Q_{S|UYZ}(s|u,y,z)\, d\big(s,\hat{\phi}(w,y,z)\big).$

Converse: Let $W = (W_T,T)$ denote the auxiliary r.v. defined in the converse of Proposition 1. Inequalities (50) and (51) follow similarly as in Proposition 1. We prove (52). Defining $\tilde{\phi}_n(M,Y^n,Z^n,S^{i-1},i) := \bar{\phi}_{i,n}(M,Y^n,Z^n,S^{i-1})$, we have

$\min_{\{g_{i,n}^{(r)}\}} \mathbb{E}\big[d(S^n,\hat{S}^n)\,\big|\,H=0\big] = \min_{\{\tilde{\phi}_n(m,y^n,z^n,s^{i-1},i)\}_{i=1}^n} \mathbb{E}\Big[\sum_{i=1}^n d\big(S_i,\tilde{\phi}_n(M,Y^n,Z^n,S^{i-1},i)\big)\,\Big|\,H=0\Big]$
$= \min_{\{\tilde{\phi}_n(\cdot,\cdot,\cdot,\cdot,\cdot)\}_{i=1}^n} \mathbb{E}\Big[\sum_{i=1}^n d\big(S_i,\tilde{\phi}_n(W_i,Z_i,Y_i,Y_{i+1}^n,i)\big)\,\Big|\,H=0\Big]$
$\leq \min_{\{\phi(w_i,z_i,y_i,i)\}} \mathbb{E}\Big[\sum_{i=1}^n d\big(S_i,\phi(W_i,Z_i,Y_i,i)\big)\,\Big|\,H=0\Big]$
$= n\min_{\{\phi(\cdot,\cdot,\cdot,\cdot)\}} \mathbb{E}\Big[\mathbb{E}\big[d\big(S_T,\phi(W_T,Z_T,Y_T,T)\big)\,\big|\,T\big]\,\Big|\,H=0\Big] = n\min_{\{\phi(\cdot,\cdot,\cdot,\cdot)\}} \mathbb{E}\big[d\big(S_T,\phi(W_T,Z_T,Y_T,T)\big)\,\big|\,H=0\big] = n\min_{\{\phi(w,z,y)\}} \mathbb{E}\big[d\big(S,\phi(W,Z,Y)\big)\,\big|\,H=0\big],$ (54)

where (54) is due to (A1) (in Appendix B). Hence, any $\Delta_0$ satisfying (6) satisfies

$\Delta_0 \leq \min_{\{\phi(w,z,y)\}} \mathbb{E}_P\big[d\big(S,\phi(W,Z,Y)\big)\big].$

This completes the proof of the converse and the theorem. ☐

A more general version of Propositions 1 and 2 is claimed in [66] as Theorems 7 and 8, respectively, in which a privacy constraint under the alternate hypothesis is also imposed. However, we have identified a mistake in the converse proof; and hence, a single-letter characterization for this general problem remains open.

To complete the single-letter characterization in Propositions 1 and 2, we bound the alphabet size of the auxiliary r.v. $W$ in the following lemma.

Lemma 5.

In Propositions 1 and 2, it suffices to consider auxiliary r.v.’s $W$ such that $|\mathcal{W}| \leq |\mathcal{U}| + 2$.

The proof of Lemma 5 uses standard arguments based on the Fenchel–Eggleston–Carathéodory’s theorem and is given in Appendix E.

Remark 1.

When $Q_{S|UYZ} = Q_{S|YZ}$, a tight single-letter characterization of $\mathcal{R}_e$ and $\mathcal{R}_d$ exists even if the privacy constraint is active under the alternate hypothesis. This is due to the fact that given $Y^n$ and $Z^n$, $M$ is independent of $S^n$ under the alternate hypothesis. In this case, $(R,\kappa,\Lambda_0,\Lambda_1) \in \mathcal{R}_e$ if and only if there exists an auxiliary r.v. $W$, such that $(Z,Y,S)-U-W$, and

$\kappa \leq I_P(W;Y|Z),$ (55)
$R \geq I_P(W;U|Z),$ (56)
$\Lambda_0 \leq H_P(S|W,Z,Y),$ (57)
$\Lambda_1 \leq H_Q(S|Z,Y),$ (58)

for some $P_{SUYZW}$ as in Proposition 1. Similarly, we have that $(R,\kappa,\Delta_0,\Delta_1) \in \mathcal{R}_d$ if and only if there exist an auxiliary r.v. $W$ and a deterministic function $\phi: \mathcal{W}\times\mathcal{Y}\times\mathcal{Z} \to \hat{\mathcal{S}}$ such that (55) and (56),

$\Delta_0 \leq \min_{\phi(\cdot,\cdot,\cdot)} \mathbb{E}_P\big[d\big(S,\phi(W,Y,Z)\big)\big],$ (59)
$\Delta_1 \leq \min_{\phi(\cdot,\cdot)} \mathbb{E}_Q\big[d\big(S,\phi(Y,Z)\big)\big],$ (60)

are satisfied for some $P_{SUYZW}$ as in Proposition 1.

The computation of the trade-off given in Proposition 1 is challenging despite the cardinality bound on the auxiliary r.v. $W$ provided by Lemma 5, as closed-form solutions do not exist in general. To see this, note that the inequality constraints defining $\mathcal{R}_e$ are not convex in general, and hence even computing specific points in the trade-off could be a hard problem. This is evident from the fact that in the absence of the privacy constraint in Proposition 1, i.e., (30), computing the maximum error exponent for a given rate constraint is equivalent to the information bottleneck problem [67], which is known to be a hard non-convex optimization problem. Also, the complexity of a brute force search is exponential in $|\mathcal{U}|$, and hence intractable for large values of $|\mathcal{U}|$. Below we provide an example which can be solved in closed form and hence computed easily.

Example 1.

Let $\mathcal{V} = \mathcal{U} = \mathcal{S} = \{0,1\}$, $V = Y$, $Z = $ constant, $V-S-U$, $P_U(0) = Q_U(0) = 0.5$, $P_{S|U}(0|0) = P_{S|U}(1|1) = Q_{S|U}(0|0) = Q_{S|U}(1|1) = 1-q$, $P_{V|S}(0|0) = P_{V|S}(1|1) = 1-p$ and $Q_{V|S}(0|0) = Q_{V|S}(1|1) = 0.5$. Then, $(R,\kappa,\Lambda_0,0) \in \mathcal{R}_e$ if there exists $r \in [0,0.5]$ such that

$R \geq 1 - h_b(r),$ (61)
$\kappa \leq 1 - h_b((r*q)*p),$ (62)
$\Lambda_0 \leq h_b(p) + h_b(q*r) - h_b(p*(q*r)),$ (63)

where for $a,b \in \mathbb{R}$, $a*b := (1-a)\cdot b + (1-b)\cdot a$, and $h_b: [0,1] \to [0,1]$ is the binary entropy function given by

$h_b(t) = -(1-t)\log(1-t) - t\log(t).$

The above characterization (numerical computation shows that the characterization given in (61)–(63) is exact even when $q \in (0,1)$) is exact for $q = 0$, i.e., $(R,\kappa,\Lambda_0,0) \in \mathcal{R}_e$ only if there exists $r \in [0,0.5]$ such that (61)–(63) are satisfied.

Proof. 

Taking $\mathcal{W} = \{0,1\}$ and $P_{W|U}(0|0) = P_{W|U}(1|1) = 1-r$, the constraints defining the trade-off given in Proposition 1 simplify to

$I_P(U;W) = 1 - h_b(r), \quad I_P(V;W) = 1 - h_b((r*q)*p),$
$H_P(S|V,W) = H_P(S|W) - I_P(S;V|W) = H_P(S|W) + H_P(V|S) - H_P(V|W) = h_b(r*q) + h_b(p) - h_b(p*(q*r)).$

On the other hand, if $q = 0$, note that $S = U$. Hence, the same constraints can be bounded as follows:

$I_P(U;W) = 1 - H_P(U|W),$
$I_P(V;W) = 1 - H_P(V|W) \leq 1 - h_b\big(h_b^{-1}(H_P(U|W)) * p\big),$ (64)
$H_P(U|V,W) = H_P(U|W) + H_P(V|U) - H_P(V|W) \leq h_b(p) + H_P(U|W) - h_b\big(h_b^{-1}(H_P(U|W)) * p\big),$ (65)

where $h_b^{-1}: [0,1] \to [0,0.5]$ is the inverse of the binary entropy function. Here, the inequalities in (64) and (65) follow by an application of Mrs. Gerber's lemma [68], since $V = U \oplus N_p$ under the null hypothesis and $N_p \sim \mathrm{Ber}(p)$ is independent of $U$ and $W$. Also, $\Lambda_{\min} = 0$ since $S = U$. Noting that $H_P(U|W) \in [0,1]$, and defining $r := h_b^{-1}(H_P(U|W)) \in [0,0.5]$, the result follows. ☐
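The boundary described by (61)–(63) is straightforward to evaluate numerically; the following Python sketch (our own code, not part of the paper) computes the $(R,\kappa,\Lambda_0)$ triple as $r$ is varied for fixed $p$ and $q$, which for $q=0$ traces the same curve that is plotted in Figure 2.

```python
import numpy as np

def hb(t):
    """Binary entropy in bits."""
    t = np.clip(t, 1e-12, 1 - 1e-12)
    return -(1 - t) * np.log2(1 - t) - t * np.log2(t)

def conv(a, b):
    """Binary convolution a*b = (1-a)b + (1-b)a."""
    return (1 - a) * b + (1 - b) * a

p, q = 0.25, 0.0
for r in np.linspace(0, 0.5, 6):
    R       = 1 - hb(r)                                          # rate bound (61)
    kappa   = 1 - hb(conv(conv(r, q), p))                        # error exponent bound (62)
    Lambda0 = hb(p) + hb(conv(q, r)) - hb(conv(p, conv(q, r)))   # equivocation bound (63)
    print(f"r={r:.1f}  R={R:.3f}  kappa={kappa:.3f}  Lambda0={Lambda0:.3f}")
```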

Figure 2 depicts the curve $\big(1-h_b(r),\ 1-h_b(p*(q*r)),\ h_b(p)+h_b(r*q)-h_b(p*(r*q))\big)$ for $q = 0$ and $p \in \{0.15, 0.25, 0.35\}$, as $r$ is varied in the range $[0,0.5]$. The projections of this curve onto the $R$–$\kappa$ and $\kappa$–$\Lambda_0$ planes are shown in Figure 3a,b, respectively, for $q \in \{0, 0.1\}$ and the same values of $p$. As expected, the error exponent $\kappa$ increases with the rate $R$, while the equivocation $\Lambda_0$ decreases with $\kappa$ at the boundary of $\mathcal{R}_e$.

Figure 2. $(R,\kappa,\Lambda_0)$ trade-off at the boundary of $\mathcal{R}_e$ in Example 1 (axes units are in bits).

Figure 3. Projections of Figure 2 onto the $R$–$\kappa$ plane and the $\kappa$–$\Lambda_0$ plane.

Proposition 1 (resp. Proposition 2) provides a characterization of $\mathcal{R}_e$ (resp. $\mathcal{R}_d$) under a vanishing type I error probability constraint. Consequently, the converse parts of these results are known as weak converse results in the context of HT. In the next subsection, we establish the optimal error exponent-privacy trade-off for the special case of zero-rate compression. This trade-off is independent of the type I error probability constraint $\epsilon \in (0,1)$, and hence is known as a strong converse result.

4.2. Zero-Rate Compression

Assume the following zero-rate constraint on the communication between the observer and the detector,

$\lim_{n\to\infty} \frac{\log(|\mathcal{M}|)}{n} = 0.$ (66)

Please note that (66) does not mean that nothing can be transmitted; rather, the message set cardinality can grow at most sub-exponentially in $n$. Such a scenario is motivated practically by low-power or low-bandwidth constrained applications in which communication is costly. Propositions 3 and 4 stated below provide an optimal single-letter characterization of $\mathcal{R}_d(\epsilon)$ and $\mathcal{R}_e(\epsilon)$ in this case. While the coding schemes in the achievability part of these results are inspired by that in [6], the analysis of the privacy achieved at the detector is new. Lemma 4 serves as a crucial tool for this purpose. We next state the results. Let

$\Delta_0^{\max} := \min_{\phi(\cdot)} \mathbb{E}_P\big[d\big(S,\phi(V)\big)\big],$ (67a)
and $\Delta_1^{\max} := \min_{\phi(\cdot)} \mathbb{E}_Q\big[d\big(S,\phi(V)\big)\big].$ (67b)

Proposition 3.

For $\epsilon \in (0,1)$, $(0,\kappa,\Delta_0,\Delta_1) \in \mathcal{R}_d(\epsilon)$ if and only if it satisfies

$\kappa \leq \min_{P_{\tilde{U}\tilde{V}} \in \mathcal{L}(P_U,P_V)} D\big(P_{\tilde{U}\tilde{V}}\,\big|\big|\,Q_{UV}\big),$ (68)
$\Delta_0 \leq \Delta_0^{\max},$ (69)
$\Delta_1 \leq \Delta_1^{\max},$ (70)

where $\phi: \mathcal{V} \to \hat{\mathcal{S}}$ is a deterministic function and

$\mathcal{L}(P_U,P_V) = \big\{P_{\tilde{U}\tilde{V}} \in \mathcal{P}(\mathcal{U}\times\mathcal{V}) : P_{\tilde{U}} = P_U,\ P_{\tilde{V}} = P_V\big\}.$

Proof. 

First, we prove that $(0,\kappa,\Delta_0,\Delta_1)$ satisfying (68)–(70) is achievable. While the encoding and decoding scheme is the same as that in [6], we mention it for the sake of completeness.

Encoding: The observer sends the message $M = 1$ if $U^n \in \mathcal{T}_{[P_U]_\delta}^n$, $\delta > 0$, and $M = 0$ otherwise.

Decoding: The detector declares $\hat{H} = 0$ if $M = 1$ and $V^n \in \mathcal{T}_{[P_V]_\delta}^n$, $\delta > 0$. Otherwise, $\hat{H} = 1$ is declared.

We analyze the type I and type II error probabilities for the above scheme. Please note that for any $\delta > 0$, the weak law of large numbers implies that

$\mathbb{P}\big(U^n \in \mathcal{T}_{[P_U]_\delta}^n,\ V^n \in \mathcal{T}_{[P_V]_\delta}^n\,\big|\,H=0\big) = \mathbb{P}\big(M = 1,\ V^n \in \mathcal{T}_{[P_V]_\delta}^n\,\big|\,H=0\big) \xrightarrow{(n)} 1.$

Hence, the type I error probability tends to zero, asymptotically. The type II error probability can be written as follows:

$\beta_n(f_n,g_n) = \mathbb{P}\big(U^n \in \mathcal{T}_{[P_U]_\delta}^n,\ V^n \in \mathcal{T}_{[P_V]_\delta}^n\,\big|\,H=1\big) = \sum_{u^n \in \mathcal{T}_{[P_U]_\delta}^n,\ v^n \in \mathcal{T}_{[P_V]_\delta}^n} Q_{U^nV^n}(u^n,v^n) \leq (n+1)^{|\mathcal{U}||\mathcal{V}|}\, e^{-n(\kappa^* - O(\delta))} = e^{-n\big(\kappa^* - \frac{|\mathcal{U}||\mathcal{V}|\log(n+1)}{n} - O(\delta)\big)},$

where

$\kappa^* = \min_{P_{\tilde{U}\tilde{V}} \in \mathcal{L}(P_U,P_V)} D\big(P_{\tilde{U}\tilde{V}}\,\big|\big|\,Q_{UV}\big).$
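As a side illustration (our own code, not part of the proof), the following Monte Carlo sketch with hypothetical binary distributions, whose $U$-marginals differ under the two hypotheses, estimates the type I and type II error probabilities of this one-bit typicality test.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical joint distributions on binary (U, V).
P_UV = np.array([[0.4, 0.1],
                 [0.1, 0.4]])                    # null hypothesis H0
Q_UV = np.outer([0.7, 0.3], [0.7, 0.3])          # alternate hypothesis H1 (different marginals)
P_U, P_V = P_UV.sum(1), P_UV.sum(0)

n, delta, trials = 1000, 0.05, 500

def typical(xn, P, delta):
    emp = np.bincount(xn, minlength=len(P)) / len(xn)
    return np.all(np.abs(emp - P) <= delta)

def run(joint):
    """Sample (U^n, V^n) i.i.d. from `joint` and run the one-bit typicality test."""
    idx = rng.choice(4, size=n, p=joint.flatten())
    un, vn = idx // 2, idx % 2
    M = int(typical(un, P_U, delta))                        # observer: one-bit message
    return int(not (M == 1 and typical(vn, P_V, delta)))    # detector: 1 = reject H0

alpha = np.mean([run(P_UV) for _ in range(trials)])   # type I error estimate (small)
beta  = np.mean([run(Q_UV) for _ in range(trials)])   # type II error estimate (essentially zero here)
print(alpha, beta)
```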

Next, we lower bound the average distortion for $S^n$ achieved by this scheme at the detector. Defining

$\Pi(U^n,\delta,P_U) := \mathbb{1}\big(U^n \notin \mathcal{T}_{[P_U]_\delta}^n\big),$ (71)
$\rho_n^{(0)}(\delta) := \big|\big|P_{S^nV^n}(\cdot) - P_{S^nV^n|\Pi(U^n,\delta,P_U)}(\cdot|0)\big|\big|,$ (72)
$\rho_n^{(1)}(\delta) := \big|\big|Q_{S^nV^n}(\cdot) - Q_{S^nV^n|\Pi(U^n,\delta,P_U)}(\cdot|1)\big|\big|, \quad \phi^n(v^n) := (\phi(v_1),\ldots,\phi(v_n)),$ (73)

we can write

$\Big|\min_{\{\bar{\phi}_i(m,v^n,s^{i-1})\}_{i=1}^n} \mathbb{E}\big[d(S^n,\hat{S}^n)\,\big|\,H=0\big] - n\min_{\phi(v)} \mathbb{E}_P\big[d\big(S,\phi(V)\big)\big]\Big| = \Big|\min_{\{\bar{\phi}_i\}_{i=1}^n} \mathbb{E}\big[d(S^n,\hat{S}^n)\,\big|\,H=0\big] - \min_{\phi^n(v^n)} \mathbb{E}\big[d\big(S^n,\phi^n(V^n)\big)\,\big|\,H=0\big]\Big|$
$\leq \Big|\min_{\{\bar{\phi}_i\}_{i=1}^n} \mathbb{E}\big[d(S^n,\hat{S}^n)\,\big|\,H=0\big] - \mathbb{P}(M=1|H=0)\min_{\phi^n(v^n)} \mathbb{E}\big[d\big(S^n,\phi^n(V^n)\big)\,\big|\,M=1,H=0\big]\Big| + \mathbb{P}(M=0|H=0)\min_{\phi^n(v^n)} \mathbb{E}\big[d\big(S^n,\phi^n(V^n)\big)\,\big|\,M=0,H=0\big]$
$\leq \Big|\min_{\{\bar{\phi}_i\}_{i=1}^n} \mathbb{E}\big[d(S^n,\hat{S}^n)\,\big|\,H=0\big] - \min_{\phi^n(v^n)} \mathbb{E}\big[d\big(S^n,\phi^n(V^n)\big)\,\big|\,M=1,H=0\big]\Big| + \mathbb{P}(M=0|H=0)\Big[\min_{\phi^n(v^n)} \mathbb{E}\big[d\big(S^n,\phi^n(V^n)\big)\,\big|\,M=1,H=0\big] + \min_{\phi^n(v^n)} \mathbb{E}\big[d\big(S^n,\phi^n(V^n)\big)\,\big|\,M=0,H=0\big]\Big]$
$= \Big|\min_{\{\bar{\phi}_i\}_{i=1}^n} \mathbb{E}\big[d(S^n,\hat{S}^n)\,\big|\,H=0\big] - \min_{\phi^n(v^n)} \mathbb{E}\big[d\big(S^n,\phi^n(V^n)\big)\,\big|\,\Pi(U^n,\delta,P_U)=0,H=0\big]\Big| + \mathbb{P}\big(\Pi(U^n,\delta,P_U)=1\,\big|\,H=0\big)\Big[\min_{\phi^n(v^n)} \mathbb{E}\big[d\big(S^n,\phi^n(V^n)\big)\,\big|\,M=1,H=0\big] + \min_{\phi^n(v^n)} \mathbb{E}\big[d\big(S^n,\phi^n(V^n)\big)\,\big|\,M=0,H=0\big]\Big]$ (74)
$\leq nD_m\,\rho_n^{(0)}(\delta) + 2e^{-n\Omega(\delta)}\, nD_m$ (75)
$\xrightarrow{(n)} 0,$ (76)

where (74) is since $\Pi(U^n,\delta,P_U) = 1 - M$ with probability one by the encoding scheme; (75) follows from

$\mathbb{P}\big(\Pi(U^n,\delta,P_U) = 1\,\big|\,H=0\big) = \mathbb{P}\big(U^n \notin \mathcal{T}_{[P_U]_\delta}^n\,\big|\,H=0\big) \leq e^{-n\Omega(\delta)}$ (77)

and ([43], Property 2(b)); and, (76) is due to (17). Similarly, it can be shown using (16) that if $Q_U = P_U$, then

$\Big|\min_{\{\bar{\phi}_{i,n}(m,v^n,s^{i-1})\}_{i=1}^n} \mathbb{E}\big[d(S^n,\hat{S}^n)\,\big|\,H=1\big] - n\min_{\phi(v)} \mathbb{E}_Q\big[d\big(S,\phi(V)\big)\big]\Big| \xrightarrow{(n)} 0.$ (78)

On the other hand, if $Q_U \neq P_U$ and $\delta$ is small enough, we have

$\mathbb{P}(M=0|H=1) = \mathbb{P}\big(\Pi(U^n,\delta,P_U) = 1\,\big|\,H=1\big) \geq 1 - e^{-n(D(P_U||Q_U) - O(\delta))} \xrightarrow{(n)} 1.$ (79)

Hence, we can write for δ small enough,

$\Big|\min_{\{\bar{\phi}_i(m,v^n,s^{i-1})\}_{i=1}^n} \mathbb{E}\big[d(S^n,\hat{S}^n)\,\big|\,H=1\big] - n\min_{\phi(v)} \mathbb{E}_Q\big[d\big(S,\phi(V)\big)\big]\Big| = \Big|\min_{\{\bar{\phi}_i\}_{i=1}^n} \mathbb{E}\big[d(S^n,\hat{S}^n)\,\big|\,H=1\big] - \min_{\phi^n(v^n)} \mathbb{E}\big[d\big(S^n,\phi^n(V^n)\big)\,\big|\,H=1\big]\Big|$
$\leq \Big|\min_{\{\bar{\phi}_i\}_{i=1}^n} \mathbb{E}\big[d(S^n,\hat{S}^n)\,\big|\,H=1\big] - \mathbb{P}(M=0|H=1)\min_{\phi^n(v^n)} \mathbb{E}\big[d\big(S^n,\phi^n(V^n)\big)\,\big|\,M=0,H=1\big]\Big| + \mathbb{P}(M=1|H=1)\min_{\phi^n(v^n)} \mathbb{E}\big[d\big(S^n,\phi^n(V^n)\big)\,\big|\,M=1,H=1\big]$
$\leq \Big|\min_{\{\bar{\phi}_i\}_{i=1}^n} \mathbb{E}\big[d(S^n,\hat{S}^n)\,\big|\,H=1\big] - \min_{\phi^n(v^n)} \mathbb{E}\big[d\big(S^n,\phi^n(V^n)\big)\,\big|\,M=0,H=1\big]\Big| + \mathbb{P}(M=1|H=1)\Big[\min_{\phi^n(v^n)} \mathbb{E}\big[d\big(S^n,\phi^n(V^n)\big)\,\big|\,M=1,H=1\big] + \min_{\phi^n(v^n)} \mathbb{E}\big[d\big(S^n,\phi^n(V^n)\big)\,\big|\,M=0,H=1\big]\Big]$
$= \Big|\min_{\{\bar{\phi}_i\}_{i=1}^n} \mathbb{E}\big[d(S^n,\hat{S}^n)\,\big|\,H=1\big] - \min_{\phi^n(v^n)} \mathbb{E}\big[d\big(S^n,\phi^n(V^n)\big)\,\big|\,\Pi(U^n,\delta,P_U)=1,H=1\big]\Big| + \mathbb{P}\big(\Pi(U^n,\delta,P_U)=0\,\big|\,H=1\big)\Big[\min_{\phi^n(v^n)} \mathbb{E}\big[d\big(S^n,\phi^n(V^n)\big)\,\big|\,M=1,H=1\big] + \min_{\phi^n(v^n)} \mathbb{E}\big[d\big(S^n,\phi^n(V^n)\big)\,\big|\,M=0,H=1\big]\Big]$ (80)
$\leq nD_m\,\rho_n^{(1)}(\delta) + 2e^{-n(D(P_U||Q_U) - O(\delta))}\, nD_m$ (81)
$\xrightarrow{(n)} 0,$ (82)

where (80) is since $\Pi(U^n,\delta,P_U) = 1 - M$ with probability one; (81) is due to (79) and ([43], Property 2(b)); and, (82) follows from (15). This completes the proof of the achievability.

We next prove the converse. Please note that by the strong converse result in [8], the right hand side (R.H.S.) of (68) is an upper bound on the achievable error exponent for all $\epsilon \in (0,1)$ even without a privacy constraint (hence, also with a privacy constraint). Also,

$\min_{\{g_{i,n}^{(r)}\}} \mathbb{E}\big[d(S^n,\hat{S}^n)\,\big|\,H=0\big] \leq \min_{\{\phi(v_i)\}_{i=1}^n} \sum_{i=1}^n \mathbb{E}_{P_{S_iV_i}}\big[d\big(S_i,\phi(V_i)\big)\big] = n\min_{\phi(v)} \mathbb{E}_P\big[d\big(S,\phi(V)\big)\big].$ (83)

Here, (83) follows from the fact that the detector can always reconstruct $\hat{S}_i$ as a function of $V_i$ for $i \in [n]$. Similarly,

$\min_{\{g_{i,n}^{(r)}\}} \mathbb{E}\big[d(S^n,\hat{S}^n)\,\big|\,H=1\big] \leq n\min_{\phi(v)} \mathbb{E}_Q\big[d\big(S,\phi(V)\big)\big].$

Hence, any achievable $\Delta_0$ and $\Delta_1$ must satisfy (69) and (70), respectively. This completes the proof. ☐

The following Proposition is the analogous result to Proposition 3 when the privacy measure is equivocation.

Proposition 4.

For $\epsilon \in (0,1)$, $(0,\kappa,\Lambda_0,\Lambda_1) \in \mathcal{R}_e(\epsilon)$ if and only if it satisfies (68) and

$\Lambda_0 \leq H_P(S|V),$ (84)
$\Lambda_1 \leq H_Q(S|V).$ (85)

Proof. 

For proving the achievability part, the encoding and decoding scheme is the same as in Proposition 3. Hence, the analysis of the error exponent given in Proposition 3 holds. To lower bound the equivocation of $S^n$ at the detector, defining $\Pi(U^n,\delta,P_U)$, $\rho_n^{(0)}(\delta)$ and $\rho_n^{(1)}(\delta)$ as in (71)–(73), we can write

$\big|nH_P(S|V) - H(S^n|M,V^n,H=0)\big| = \big|H(S^n|V^n,H=0) - H(S^n|M,V^n,H=0)\big|$
$\leq \big|H(S^n,V^n|H=0) - H(S^n,V^n|M,H=0)\big|$
$\leq \big|H(S^n,V^n|H=0) - \mathbb{P}(M=1|H=0)\,H(S^n,V^n|M=1,H=0)\big| + \mathbb{P}(M=0|H=0)\,H(S^n,V^n|M=0,H=0)$
$\leq \big|H(S^n,V^n|H=0) - H(S^n,V^n|M=1,H=0)\big| + \mathbb{P}(M=0|H=0)\,\big[H(S^n,V^n|M=1,H=0) + H(S^n,V^n|M=0,H=0)\big]$
$= \big|H(S^n,V^n|H=0) - H(S^n,V^n|\Pi(U^n,\delta,P_U)=0,H=0)\big| + \mathbb{P}\big(\Pi(U^n,\delta,P_U)=1\,\big|\,H=0\big)\,\big[H(S^n,V^n|M=1,H=0) + H(S^n,V^n|M=0,H=0)\big]$
$\leq -2\rho_n^{(0)}(\delta)\log\frac{2\rho_n^{(0)}(\delta)}{|\mathcal{S}|^n|\mathcal{V}|^n} + 2e^{-n\Omega(\delta)}\log\big(|\mathcal{S}|^n|\mathcal{V}|^n\big)$ (86)
$\xrightarrow{(n)} 0,$ (87)

where (86) follows due to Lemma 3, ([60], Lemma 2.12), and the fact that the entropy of a r.v. is bounded by the logarithm of the cardinality of its support; and, (87) follows from (17) in Lemma 4 since $\delta > 0$. In a similar way, it can be shown using (16) that if $Q_U = P_U$, then

$\big|H(S^n|V^n,H=1) - H(S^n|M,V^n,H=1)\big| \xrightarrow{(n)} 0.$ (88)

On the other hand, if $Q_U \neq P_U$ and $\delta$ is small enough, we can write

$\big|nH_Q(S|V) - H(S^n|M,V^n,H=1)\big| = \big|H(S^n|V^n,H=1) - H(S^n|M,V^n,H=1)\big|$
$\leq \big|H(S^n,V^n|H=1) - H(S^n,V^n|M,H=1)\big|$
$\leq \big|H(S^n,V^n|H=1) - H(S^n,V^n|M=0,H=1)\big| + \mathbb{P}\big(\Pi(U^n,\delta,P_U)=0\,\big|\,H=1\big)\,\big[H(S^n,V^n|M=0,H=1) + H(S^n,V^n|M=1,H=1)\big]$
$\leq -2\rho_n^{(1)}(\delta)\log\frac{2\rho_n^{(1)}(\delta)}{|\mathcal{S}|^n|\mathcal{V}|^n} + 2e^{-n(D(P_U||Q_U) - O(\delta))}\log\big(|\mathcal{S}|^n|\mathcal{V}|^n\big),$ (89)

where (89) follows from Lemma 3 and (79). It follows from (15) in Lemma 4 that for $\delta > 0$ sufficiently small, $\rho_n^{(1)}(\delta) \leq e^{-n\bar{\delta}}$ for some $\bar{\delta} > 0$, thus implying that the R.H.S. of (89) tends to zero. This completes the proof of achievability.

The converse follows from the results in [6,8] that the R.H.S. of (68) is the optimal error exponent achievable for all values of $\epsilon \in (0,1)$ even when there is no privacy constraint, and the following inequality:

$H(S^n|M,V^n,H=j) \leq H(S^n|V^n,H=j), \quad j=0,1.$ (90)

This concludes the proof of the Proposition. ☐

In Section 2.3, we mentioned that it is possible to achieve a positive error exponent with perfect privacy in our model. Here, we provide an example of TAI with an equivocation privacy constraint under both hypotheses, and show that perfect privacy is possible. Recall that TAI is a special case of TACI, in which $Z = $ constant, and hence, the null and alternate hypotheses are given by

$H_0: (U^n,Y^n) \sim \prod_{i=1}^n P_{UY}, \quad \text{and} \quad H_1: (U^n,Y^n) \sim \prod_{i=1}^n P_U P_Y.$

Example 2.

Let S=U={0,1,2,3}, Y={0,1},

PSU=0.125·1100110000110011,PY|U=10011001,

PSUY:=PSUPY|U and QSUY:=PSUPY, where PY=uUPU(u)PY|U(y|u). Then, we have HQ(S|Y)=HP(S)=HP(U)=2 bits. Also, noting that under the null hypothesis, Y=Umod2, HP(S|Y)=2 bits. It follows from the inner bound given by Equations (31)–(34), and, (37) and (38) that (R,κ,Λ0,Λ1)Re(ϵ), ϵ(0,1) if

RIP(W;U),κIP(W;Y),Λ0HP(S|W,Y),Λ1HQ(S|W,Y)=HQ(S|W),

where PSUYW:=PSUYPW|U and QSUYW:=QSUYPW|U for some conditional distribution PW|U. If we set W:=Umod2, then we have IP(U;W)=1 bit, IP(Y;W)=HP(Y)=1 bit, HP(S|W,Y)=HP(S|Y)=2 bits, and HQ(S|W)=HP(S|Y)=2 bits. Thus, by revealing only W to the detector, it is possible to achieve a positive error exponent while ensuring maximum privacy under both the null and alternate hypothesis, i.e., the tuple (1,1,2,2)Re(ϵ), ϵ(0,1).

5. A Counter-Example to the Strong Converse

Ahlswede and Csiszár obtained a strong converse result for the DHT problem without a privacy constraint in [5], where they showed that for any positive rate R, the optimal achievable error exponent is independent of the type I error probability constraint ϵ. Here, we explore whether a similar result holds in our model, in which an additional privacy constraint is imposed. We will show through a counter-example that this is not the case in general. The basic idea used in the counter-example is a “time-sharing” argument which is used to construct from a given coding scheme that achieves the optimal rate-error exponent-equivocation trade-off under a vanishing type I error probability constraint, a new coding scheme that satisfies the given type I error probability constraint ϵ* and the same error exponent as before, yet achieves a higher equivocation for Sn at the detector. This concept has been used previously in other contexts, e.g., in the characterization of the first-order maximal channel coding rate of additive white gaussian noise (AWGN) channel in the finite block-length regime [69], and subsequently in the characterization of the second order maximal coding rate in the same setting [70]. However, we will provide a self-contained proof of the counter-example by using Lemma 4 for this purpose.

Assume that the joint distribution PSUV is such that HP(S|U,V)<HP(S|V). Proving the strong converse amounts to showing that any (R,κ,Λ0,Λ1)Re(ϵ) for some ϵ(0,1) also belongs to Re. Consider TAI problem with an equivocation privacy constraint, in which RHP(U) and Λ1Λmin. Then, from the optimal single-letter characterization of Re given in Proposition 1, it follows by taking W=U that (HP(U),IP(V;U),HP(S|V,U),Λmin)Re. Please note that IP(V;U) is the maximum error exponent achievable for any type I error probability constraint ϵ(0,1), even when Un is observed directly at the detector. Thus, for vanishing type I error probability constraint ϵ0 and κ=IP(V;U), the term HP(S|V,U) denotes the maximum achievable equivocation for Sn under the null hypothesis. From the proof of Proposition 1, the coding scheme achieving this tuple is as follows:

  1. Quantize un to codewords in Bn={un(j)T[PU]δn,j[en(HP(U)+η)]} and send the index of quantization to the detector, i.e., if unT[PU]δn, send M=j, where j is the index of un in Bn. Else, send M=0.

  2. At the detector, if M=0, declare H^=1. Else, declare H^=0 if (un(M),vn)T[PUV]δn for some δ>δ, and H^=1 otherwise.

The type I error probability of the above scheme tends to zero asymptotically with n. Now, for a fixed ϵ*>0, consider a modification of this coding scheme as follows:

  1. If unT[PU]δn, send M=j with probability 1ϵ*, where j is the index of un in Bn, and with probability ϵ*, send M=0. If unT[PU]δn, send M=0.

  2. At the detector, if M=0, declare H^=1. Else, declare H^=0 if un(M),vn)T[PUV]δn for some δ>δ, and H^=1 otherwise.

It is easy to see that for this modified coding scheme, the type I error probability is asymptotically equal to ϵ*, while the error exponent remains the same as I(V;U) since the probability of declaring H^=0 is decreased. Recalling that Π(un,δ,PU):=1unT[PU]δn, we also have

1nHSn|M,Vn,H=0=(1γn)(1ϵ*)1nHSn|Un,Vn,Π(Un,δ,PU)=0,H=0+(1γn)ϵ*1nHSn|M=0,Vn,Π(Un,δ,PU)=0,H=0+γn1nHSn|M=0,Vn,Π(Un,δ,PU)=1,H=0(1γn)(1ϵ*)HPS|U,Vγn+(1γn)ϵ*1nHSn|M=0,Vn,Π(Un,δ,PU)=0,H=0+γn1nHSn|M=0,Vn,Π(Un,δ,PU)=1,H=0>(1γn)(1ϵ*)HPS|U,Vγn+(1γn)ϵ*HP(S|U,V)γnn (91)
+γn1nHSn|M=0,Vn,H=0,Π(Un,δ,PU)=1 (92)
=(1γn)(1ϵ*)HPS|U,Vγn+(1γn)ϵ*HPS|U,Vγnn+γn (93)
=(1γn)HPS|U,Vγ¯n, (94)

where {γn}nN denotes some sequence of positive numbers such that γn(n)0, and

γn:=PUnT[PU]δn|H=0enΩ(δ)(n)0,γn:=2ρn*log2ρn*|S|n, (95)
ρn*:=PSnVn|Π(Un,δ,PU),M(·|0,0)PSnVn(·)=PSnVn|Π(Un,δ,PU)(·|0)PSnVn(·), (96)
γn:=γnnH(Sn|M=0,Vn,,H=0,Π(Un,δ,PU)=1)(n)0,γ¯n:=(1γn)(1ϵ*)γn+(1γn)ϵ*γnnγn. (97)

Equation (91) follows similarly to the proof of Theorem 1 in [71]. Equation (92) is obtained as follows:

1nHSn|M=0,Vn,IU(Un,δ)=0,H=0
1nHSn|Vn,H=0γnn (98)
>HP(S|U,V)γnn. (99)

Here, (98) is obtained by an application of Lemma 3; and (99) is due to the assumption that HP(S|U,V)<HP(S|V).

It follows from Lemma 4 that ρn*(n)0, which in turn implies that

γnn(n)0. (100)

From (95), (97) and (100), we have that γ¯n(n)0. Hence, Equation (94) implies that (HP(U),IP(V;U),Λ0*,Λmin)Re(ϵ*) for some Λ0*>HP(S|U,V). Since (HP(U),IP(V;U),Λ0*,Λmin)Re, this implies that in general, the strong converse does not hold for HT with an equivocation privacy constraint. The same counter-example can be used in a similar manner to show that the strong converse does not hold for HT with an average distortion privacy constraint either.

6. Conclusions

We have studied the DHT problem with a privacy constraint, with equivocation and average distortion under a causal disclosure assumption as the measures of privacy. We have established a single-letter inner bound on the rate-error exponent-equivocation and rate-error exponent-distortion trade-offs. We have also obtained the optimal rate-error exponent-equivocation and rate-error exponent-distortion trade-offs for two special cases, when the communication rate over the channel is zero, and for TACI under a privacy constraint. It is interesting to note that the strong converse for DHT does not hold when there is an additional privacy constraint in the system. Extending these results to the case when the communication between the observer and detector takes place over a noisy communication channel is an interesting avenue for future research. Yet another important topic worth exploring is the trade-off between rate, error probability and privacy in the finite sample regime for the setting considered in this paper.

Abbreviations

The following abbreviations are used in this manuscript:

HT Hypothesis testing
DHT Distributed hypothesis testing
TACI Testing against conditional independence
TAI Testing against independence
DP Differential privacy
KL Kullback–Leibler
SHA Shimokawa-Han-Amari

Appendix A. Proof of Lemma 1

Please note that for a stochastic detector, the type I and type II error probabilities are linear functions of PH^|M,Vn. As a result, for each fixed n and fn, αnfn,gn and βnfn,gn for a stochastic detector gn can be thought of as the type I and type II errors achieved by “time-sharing” among a finite number of deterministic detectors. To see this, consider some ordering on the elements of the set M×Vn and let νi:=PH^|M,Vn(0|i), i[1:N], where i denotes the ith element of M×Vn and N=|M×Vn|. Then, we can write

PH^|M,Vn=ν11ν1ν21ν2νN1νN.

Then, it is easy to see that PH^|M,Vn=i=1NνiIi, where Ii:=[ei1ei] and ei is an N length vector with 1 at the ith component and 0 elsewhere. Now, suppose (αn(1),βn(1)) and (αn(2),βn(2)) denote the pair of type I and type II error probabilities achieved by deterministic detectors gn(1) and gn(2), respectively. Let A1,n and A2,n denote their corresponding acceptance regions for H0. Let gn(θ) denote the stochastic detector formed by using gn(1) and gn(2) with probabilities θ and 1θ, respectively. From the above-mentioned linearity property, it follows that gn(θ) achieves type I and type II error probabilities of αnfn,gn(θ)=θαn(1)+(1θ)αn(2) and βnfn,gn(θ)=θβn(1)+(1θ)βn(2), respectively. Let r(θ)=min(θ,1θ). Then, for θ(0,1),

1nlogβnfn,gn(θ)min1nlogβn(1),1nlogβn(2)1nlog(r(θ)).

Hence, either

αn(1)αnfn,gn(θ)and1nlogβn(1)1nlogβnfn,gn(θ)+1nlog(r(θ)),

or

αn(2)αnfn,gn(θ)and1nlogβn(2)1nlogβnfn,gn(θ)+1nlog(r(θ)).

Thus, since 1nlog(r(θ))(n)0, a stochastic detector does not offer any advantage over deterministic detectors in the trade-off between the error exponent and the type I error probability.

Appendix B. Proof of Lemma 2

Let P˜SnUnVnMS^n(Cn,0)=PSnUnVnMi=1nP˜S^i|M,Vn,Si1 and P˜SnUnVnMS^n(Cn,1)=QSnUnVnMi=1nP˜S^i|M,Vn,Si1 denote the joint distribution of the r.v.’s (Sn,Un,Vn,M,S^n) under hypothesis H0 and H1, respectively, where P˜S^i|M,Vn,Si1 denotes gi,n(r) for i[n]. Then, we have

mingi,n(r)EdSn,S^n|H=j=minP˜S^i|M,Vn,Si1i=1nEP˜(j)dSn,S^n=minP˜S^i|M,Vn,Si1i=1n1ni=1nEP˜(j)dSi,S^i=1ni=1n(m,vn,si1)P˜MVnSi1(j)(m,vn,si1)minP˜S^i|M,Vn,Si1(·|m,vn,si1)s^iP˜S^i|M,Vn,Si1(s^i|m,vn,si1)EP˜Si|M,Vn,Si1(j)(·|m,vn,si1)dSi,s^i=1ni=1nm,vn,si1P˜MVnSi1(j)(m,vn,si1)EP˜Si|M,Vn,Si1(j)(·|m,vn,si1)dSi,ϕij(m,vn,si1),

where

ϕij(m,vn,si1)=arg mins^S^EP˜Si|M,Vn,Si1(j)(·|m,vn,si1)d(Si,s^).

Continuing, we have

mingi,n(r)EdSn,S^n|H=j=1ni=1nm,vn,si1P˜MVnSi1(j)(m,vn,si1)minϕi(m,vn,si1)EP˜Si|M,Vn,Si1(j)(·|m,vn,si1)dSi,ϕi(m,vn,si1)=min{ϕi(m,vn,si1)}i=1n1ni=1nEP˜(j)dSi,ϕi(M,Vn,Si1). (A1)

This completes the proof.

Appendix C. Proof of Lemma 4

We will first prove (15). Fix δ>0. For γ>0, define the following sets:

B0,γδ:=ynT[PY]γn:PYn(yn)PYn|Π(Xn,δ,PX)(yn|0), (A2)
C0,γδ:=ynT[PY]γn:PYn(yn)<PYn|Π(Xn,δ,PX)(yn|0),B1,γδ:=ynT[QY]γn:QYn(yn)QYn|Π(Xn,δ,PX)(yn|0),C1,γδ:=ynT[QY]γn:QYn(yn)<QYn|Π(Xn,δ,PX)(yn|0),B2,γδ:=ynT[QY]γn:QYn(yn)QYn|Π(Xn,δ,PX)(yn|1),C2,γδ:=ynT[QY]γn:QYn(yn)<QYn|Π(Xn,δ,PX)(yn|1). (A3)

Then, we can write

QYn(·)QYn|Π(Xn,δ,PX)(·|1)=12yn|QYn(yn)QYn|Π(Xn,δ,PX)(yn|1)|=12ynT[QY]γn|QYn(yn)QYn|Π(Xn,δ,PX)(yn|1)|+12ynT[QY]γn|QYn(yn)QYn|Π(Xn,δ,PX)(yn|1)|12ynT[QY]γnQYn(yn)+QYn|Π(Xn,δ,PX)(yn|1)+12ynT[QY]γn|QYn(yn)QYn|Π(Xn,δ,PX)(yn|1)|. (A4)

Next, note that

QYn|Π(Xn,δ,PX)(yn|1)=QYn(yn)QΠ(Xn,δ,PX)|Yn(1|yn)QΠ(Xn,δ,PX)(1)QYn(yn)QΠ(Xn,δ,PX)(1)2QYn(yn), (A5)

for sufficiently large n (depending on |X|), since QΠ(Xn,δ,PX)(1)(n)1. Thus, for n large enough,

ynT[QY]γnQYn(yn)+QYn|Π(Xn,δ,PX)(yn|1)3ynT[QY]γnQYn(yn)enΩ(γ). (A6)

We can bound the last term in (A4) as follows:

ynT[QY]γn|QYn(yn)QYn|Π(Xn,δ,PX)(yn|1)|=ynB2,γδQYn(yn)QYn|Π(Xn,δ,PX)(yn|1)+ynC2,γδQYn|Π(Xn,δ,PX)(yn|1)QYn(yn)=ynB2,γδQYn(yn)QYn|Π(Xn,δ,PX)(yn|1)+ynC2,γδQYn|Π(Xn,δ,PX)(yn|1)QYn(yn)=ynB2,γδQYn(yn)1QYn|Π(Xn,δ,PX)(yn|1)QYn(yn)+ynC2,γδQYn(yn)QYn|Π(Xn,δ,PX)(yn|1)QYn(yn)1=ynB2,γδQYn(yn)1QΠ(Xn,δ,PX)|Yn(1|yn)QΠ(Xn,δ,PX)(1)+ynC2,γδQYn(yn)QΠ(Xn,δ,PX)|Yn(1|yn)QΠ(Xn,δ,PX)(1)1 (A7)
ynB2,γδQYn(yn)1QΠ(Xn,δ,PX)|Yn(1|yn)+ynC2,γδQYn(yn)1QΠ(Xn,δ,PX)(1)1. (A8)

Let PY˜ denote the type of yn and define

En(δ,γ):=minPY˜PnT[QY]γnminPX˜PnT[PX]δnD(PX˜|Y˜||QX|Y|PY˜).

Then, for ynT[QY]γn, arbitrary γ˜>0 and n sufficiently large (depending on |X|,|Y|,δ,γ), it follows from ([60], Lemma 2.6) that

QΠ(Xn,δ,PX)|Yn(1|yn)1enEn(δ,γ)γ˜, (A9)
and QΠ(Xn,δ,PX)(1)1en(D(PX||QX)γ˜). (A10)

From (A4), (A6) and (A8)–(A10), it follows that

QYn(·)QYn|Π(Xn,δ,PX)(·|1)enΩ(γ)+enEn(δ,γ)γ˜+en(D(PX||QX)γ˜). (A11)

We next show that En(δ,γ)>0 for sufficiently small δ>0 and γ>0. This would imply that the R.H.S of (A11) converges exponentially to zero (for γ˜ small enough) with exponent δ¯:=minΩ(γ),En(δ,γ)γ˜,D(PX||QX)γ˜, thus proving (15). We can write,

En(δ,γ)minPY˜T[QY]γnminPX˜T[PX]δnD(PX˜||Q^X) (A12)
2minPY˜T[QY]γnminPX˜T[PX]δnPX˜Q^X2, (A13)

where

Q^X(x):=yPY˜(y)QX|Y(x|y).

Here, (A12) follows due to the convexity of KL divergence (A13) is due to Pinsker’s inequality [60]. We also have from the triangle inequality satisfied by total variation that,

PX˜Q^XPXQXPX˜PXQ^XQX.

For ynT[QY]γn,

Q^XQXQX|YPY˜QXYPY˜QYO(γ).

Also, for PX˜T[PX]δn,

PX˜PXO(δ).

Hence,

En(δ,γ)2PXQXO(γ)O(δ)2.

Since PXQX, En(δ,γ)>0 for sufficiently small γ>0 and δ>0. This completes the proof of (15).

We next prove (17). Similar to (A4) and (A5), we have,

PYn(·)PYn|Π(Xn,δ,PX)(·|0)12ynT[PY]γnPYn(yn)+PYn|Π(Xn,δ,PX)(yn|0)+12ynT[PY]γn|PYn(yn)PYn|Π(Xn,δ,PX)(yn|0)|, (A14)

and

PYn|Π(Xn,δ,PX)(yn|0)2PYn(yn), (A15)

since PΠ(Xn,δ,PX)(0)(n)1.

Also, for γ<δ|Y| and sufficiently large n (depending on δ,γ,|X|,|Y|), we have

ynT[PY]γn|PYn(yn)PYn|Π(Xn,δ,PX)(yn|0)|=ynB0,γδPYn(yn)PYn|Π(Xn,δ,PX)(yn|0)+ynC0,γδPYn|Π(Xn,δ,PX)(yn|0)PYn(yn)ynB0,γδPYn(yn)1PΠ(Xn,δ,PX)|Yn(0|yn)+ynC0,γδPYn(yn)1PΠ(Xn,δ,PX)(0)1ynB0,γδPYn(yn)enΩ(δγ|Y|)+ynC0,γδPYn(yn)enΩ(δ) (A16)
enΩ(δγ|Y|), (A17)

where to obtain (A16), we used

PΠ(Xn,δ,PX)(0)1enΩ(δ), (A18)
and PΠ(Xn,δ,PX)|Yn(0|yn)1enΩ(δγ|Y|),forynB0,γδandγ<δ|Y|. (A19)

Here, (A18) follows from ([60], Lemma 2.12), and (A19) follows from ([60], Lemmas 2.10 and 2.12), respectively. Thus, from (A14), (A15) and (A17), we can write that,

PYn(·)PYn|Π(Xn,δ,PX)(·|0)enΩ(γ)+enΩ(δγ|Y|)(n)0.

This completes the proof of (17). The proof of (16) is exactly the same as (17), with the only difference that the sets B1,γδ and C1,γδ are used in place of B0,γδ and C0,γδ, respectively.

Appendix D. Proof of Theorems 1 and 2

We describe the encoding and decoding operations which are the same for both Theorems 1 and 2. Fix positive numbers (small) η,δ>0, and let δ:=δ2,δ^:=|U|δ,δ˜:=2δ and δ¯:=δ|V|.

Codebook Generation: Fix a finite alphabet W and a conditional distribution PW|U. Let Bn=Wn(j),j[Mn], Mn:=en(IP(U:W)+η), denote a random codebook such that each Wn(j) is randomly and independently generated according to distribution i=1nPW(wi), where

PW(w)=uUPU(u)PW|U(w|u).

Denote a realization of Bn by Bn and the support of Bn by Bn.

Encoding: For a given codebook Bn, let

PEu(Bn)(j|un):=i=1nPU|W(ui|wi(j))ji=1nPU|W(ui|wi(j))), (A20)

denote the likelihood encoding function. If IP(U;W)+η+|U||W|log(n+1)n>R, the observer performs uniform random binning on the indices in Mn, i.e., for each jMn, it selects an index uniformly at random from the set M˜n:=enR|U||W|log(n+1)n. Denote the random binning function by fB and a realization of it by fB. If IP(U;W)+η+|U||W|log(n+1)nR, set fB as the identity function with probability one, i.e., fB(j)=j. If unT[PU]δn, then the observer outputs the message m=(t,fB(j)) if IP(U;W)+η+|U||W|log(n+1)n>R or m=(t,j) otherwise, where j[Mn] is chosen randomly with probability PEu(Bn)(j|un) and t denotes the index of the joint type of (un,wn(j)) in the set of types Pn(U×W). If unT[PU]δn, the observer outputs the error message M=0. Please note that |M|enR since the total number of types in Pn(U×W) is upper bounded by (n+1)|U||W| ([60], Lemma 2.2). Let Cn:=(Bn,fB), and let Cn=(Bn,fB) and μn(·) denote its realization and probability distribution, respectively. For a given Cn, let fn(Cn) represent the encoder induced by the above operations, where fn(Cn):UnP(M) and M:=[enR].

Decoding: If M=0 or tT[PUW]δn, H^=1 is declared. Else, given m=(t,fB(j)) and Vn=vn, the detector decodes for a codeword w^n:=wn(j^)T[PW]δ^n in the codebook Bn such that

j^=arg minl:fB(l)=fB(j),wn(l)T[PW]δ^nHe(wn(l)|vn),ifIP(U;W)+η+1n|U||W|log(n+1)>R,j^=j,otherwise.

Denote the above decoding rule by PED(Cn), where PED(Cn):M×VnJ. The detector declares H^=0 if (w^n,vn)T[PWV]δ˜n and H^=1 otherwise. Let gn(Cn):M×VnH^ stand for the decision rule induced by the above operations.

System induced distributions and auxiliary distributions:

The system induced probability distribution when H=0 is given by

P˜(Cn,0)(sn,un,vn,j,wn,m,j^,w^n)=i=1nPSUV(si,ui,vi,zi)PEu(Bn)(j|un)1(wn(j)=wn)1(fB(j)=m)1j^=PED(Cn)(m,vn)
1(wn(j^)=w^n),if unT[PU]δn, (A21)

and

P˜(Cn,0)(sn,un,vn,m)=i=1nPSUV(si,ui,vi)1(m=0),if unT[PU]δn. (A22)

Consider two auxiliary distribution Ψ˜ and Ψ given by

Ψ˜(Cn,0)(sn,un,vn,j,wn,m,j^,w^n):=i=1nPSUV(si,ui,vi)PEu(Bn)(j|un)1(wn(j)=wn)1(fB(j)=m)1j^=PED(Cn)(m,vn)1(wn(j^)=w^n), (A23)

and

Ψ(Cn,0)(sn,un,vn,j,wn,m,j^,w^n):=1Mn1(wn(j)=wn)i=1nPU|W(ui|wi)i=1nPVS|U(vi,si|ui)1(fB(j)=m)1j^=PED(Cn)(m,vn)1(wn(j^)=w^n). (A24)

Let P˜(Cn,1) and Ψ˜(Cn,1) denote probability distributions under H=1 defined by the R.H.S. of (A21)–(A23) with PSUV replaced by QSUV, and let Ψ(Cn,1) denote the R.H.S. of (A24) with PVS|U replaced by QVS|U. Please note that the encoder fn(Cn) is such that PEu(Bn)(j|un)=Ψ(Cn,0)(j|un) and hence, the only difference between the joint distribution Ψ(Cn,0) and Ψ˜(Cn,0) is the marginal distribution of Un. By the soft-covering lemma [62,64], it follows that for some γ1>0,

EμnΨUn(Cn,0)Ψ˜Un(Cn,0)enγ1(n)0. (A25)

Hence, from ([43], Property 2(d)), it follows that

EμnΨ(Cn,0)Ψ˜(Cn,0)enγ1. (A26)

Also, note that the only difference between the distributions P˜(Cn,0) and Ψ˜(Cn,0) is PEu(Bn) when UnT[PU]δn. Since

PUnT[PU]δn|H=0enΩ(δ), (A27)

it follows that

EμnP˜(Cn,0)Ψ˜(Cn,0)enΩ(δ). (A28)

Equations (A26) and (A28) together imply via ([43], Property 2(c)) that

EμnP˜(Cn,0)Ψ(Cn,0)enΩ(δ)+enγ1(n)0. (A29)

Please note that for l{0,1}, the joint distribution Ψ(Cn,l) satisfies

Si(wi(J),Vi)(M,wn(J),Vn,Si1),i[n]. (A30)

Also, since IP(U;W)+η>0, by the application of soft-covering lemma,

Eμni=1nPWΨWi(J)(Cn,l)|H=leγ3n(n)0,l=0,1, (A31)

for some γ3>0.

If QU=PU, then it again follows from the soft-covering lemma that

EμnΨUn(Cn,1)Ψ˜Un(Cn,1)eγ1n(n)0, (A32)

thereby implying that

EμnΨ(Cn,1)Ψ˜(Cn,1)eγ1n. (A33)

Also, note that the only difference between the distributions P˜(Cn,1) and Ψ˜(Cn,1) is PEu(Bn) when UnT[PU]δn. Since QU=PU implies PUnT[PU]δn|H=1enΩ(δ), it follows that

EμnP˜(Cn,1)Ψ˜(Cn,1)enΩ(δ). (A34)

Equations (A33) and (A34) together imply that

EμnP˜(Cn,1)Ψ(Cn,1)enΩ(δ)+eγ1n(n)0. (A35)

Let P¯P˜(Cn,0)=EμnPP˜(Cn,0) and P¯P˜(Cn,1)=EμnPP˜(Cn,1) denote the expected probability measure (random coding measure) induced by PMF’s P˜(Cn,0) and P˜(Cn,1), respectively. Then, note that from (A24), (A29), (A31) and the weak law of large numbers,

P¯P˜(Cn,0)Un,Wn(J)T[PUW]δn1enΩ(δ)(n)1. (A36)

Analysis of type I and type II error probabilities:

We analyze type I and type II error probabilities of the coding scheme mentioned above averaged over the random ensemble Cn.

Type I error probability:

Please note that a type I error occurs only if one of the following events occur:

ETE=(Un,Vn)T[PUV]δ¯n,ESE=TPnT[PUW]δn,EME=Vn,Wn(J)T[PVW]δ˜n,EDE={len(IP(U;W)+η),lJ:fB(l)=fB(J),Wn(l)T[PW]δ^n,HeWn(l)|VnHeWn(J)|Vn}.

Let E:=ETEESEEMEEDE. Then, the expected type I error probability over Cn be upper bounded as

Eμnαnfn(Cn),gn(Cn)P¯P˜(Cn,0)(E). (A37)

Please note that P¯P˜(Cn,0)(ETE) tends to 0 asymptotically by the weak law of large numbers. From (A36), P¯P˜(Cn,0)(ESE)(n)0. Given ESEc and ETEc holds, it follows from the Markov chain relation VUW and the Markov lemma [68] that P¯P˜(Cn,0)(EME)(n)0. Also, as in the proof of Theorem 2 in [13], it follows that

P¯P˜(Cn,0)(EDE|Vn=vn,Wn(J)=wn,EMEcESEcETEc)enRIP(U;W|V)δn(1), (A38)

where δn(1)(n)η+O(δ). Thus, if R>IP(U;W|V), it follows by choosing η=O(δ) that for δ>0 small enough, the R.H.S. of (A38) tends to zero asymptotically. By the union bound on probability, the R.H.S. of (A37) tends to zero.

Type II error probability:

Let δ=|W|δ˜. Please note that a type II error occurs only if VnT[PV]δn and M0, i.e., UnT[PU]δn and TT[PUW]δn. Hence, we can restrict the type II error analysis to only such (Un,Vn). Denoting the event that a type II error occurs by D0, we have

Eμnβnfn(Cn),gn(Cn)=un,vnP¯P˜(Cn,1)(Un=un,Vn=vn)P¯P˜(Cn,1)(D0|Un=un,Vn=vn). (A39)

Let ENE:=ESEcVnT[V]δnUnT[U]δn. The last term in (A39) can be upper bounded as follows:

P¯P˜(Cn,1)(D0|Un=un,Vn=vn)=P¯P˜(Cn,1)(ENE|Un=un,Vn=vn)P¯P˜(Cn,1)(D0|Un=un,Vn=vn,ENE)P¯P˜(Cn,1)(D0|Un=un,Vn=vn,ENE)=j,m˜P¯P˜(Cn,1)(J=j,fB(J)=m˜|Un=un,Vn=vn,ENE)P¯P˜(Cn,1)(D0|Un=un,Vn=vn,J=j,fB(J)=m˜,ENE) (A40)
=P¯P˜(Cn,1)(D0|Un=un,Vn=vn,J=1,fB(J)=1,ENE)=wnWnP¯P˜(Cn,1)(Wn(1)=wn|Un=un,Vn=vn,J=1,fB(J)=1,ENE) (A41)
P¯P˜(Cn,1)(D0|Un=un,Vn=vn,J=1,fB(J)=1,Wn(1)=wn,ENE). (A42)

where (A41) follows since the term in (A40) is independent of the indices (j,m˜) due to the symmetry of the codebook generation, encoding and decoding procedure. The first term in (A42) can be upper bounded as

P¯P˜(Cn,1)(Wn(1)=wn|Un=un,Vn=vn,J=1,fB(J)=1,ENE)1|TPW˜|U˜|en(H(W˜|U˜)1n|U||W|log(n+1)). (A43)

To obtain (A43), we used the fact that PEu(Bn)(1|un) in (A20) is invariant to the joint type PU˜W˜ of (Un,Wn(1))=(un,wn) (keeping all the other codewords fixed). This in turn implies that given ENE, each sequence in the conditional type class TPW˜|U˜(un) is equally likely (in the randomness induced by Bn and stochastic encoding in (A20)) and its probability is upper bounded by 1|TPW˜|U˜|. Defining the events

EBE:=lMn,lJ,fB(l)=M,Wn(l))T[PW]δ^n,(Vn,Wn(l))T[PVW]δ˜n, (A44)
F:={Un=un,Vn=vn,J=1,fB(J)=1,Wn(1)=wn,ENE}, (A45)
F1:={Un=un,Vn=vn,J=1,fB(J)=1,Wn(1)=wn,ENE,EBEc}, (A46)
and F2:={Un=un,Vn=vn,J=1,fB(J)=1,Wn(1)=wn,ENE,EBE}, (A47)

the last term in (A42) can be written as

P¯P˜(Cn,1)(D0|F)=P¯P˜(Cn,1)(EBEc|F)P¯P˜(Cn,1)(D0|F1)+P¯P˜(Cn,1)(EBE|F)P¯P˜(Cn,1)(D0|F2). (A48)

The analysis of the terms in (A48) is essentially similar to that given in the proof of Theorem 2 in [13], except for a subtle difference that we mention next. To bound the binning error event EBE, we require an upper bound similar to

P¯P˜(Cn,1)Wn(l)=w˜n|F2P¯P˜(Cn,1)(Wn(l)=w˜n),w˜nWn, (A49)

that is used in the proof of Theorem 2 in [13]. Please note that the stochastic encoding scheme considered here is different from the encoding scheme in [13]. In place (A49), we will show that for l1,

P¯P˜(Cn,1)(Wn(l)=w˜n|F)3P¯P˜(Cn,1)(Wn(l)=w˜n), (A50)

which suffices for the proof. Please note that

P¯P˜(Cn,1)(Wn(l)=w˜n|F)=P¯P˜(Cn,1)(Wn(l)=w˜n|Un=un,Vn=vn)P¯P˜(Cn,1)(Wn(1)=wn|Wn(l)=w˜n,Un=un,Vn=vn)P¯P˜(Cn,1)(Wn(1)=wn|Un=un,Vn=vn)P¯P˜(Cn,1)(J=1|Wn(1)=wn,Wn(l)=w˜n,Un=un,Vn=vn)P¯P˜(Cn,1)(J=1|Wn(1)=wn,Un=un,Vn=vn)P¯P˜(Cn,1)(fB(J)=1|J=1,Wn(1)=wn,Wn(l)=w˜n,Un=un,Vn=vn)P¯P˜(Cn,1)(fB(J)=1|J=1,Wn(1)=wn,Un=un,Vn=vn) (A51)
P¯P˜(Cn,1)(ENE|fB(J)=1,J=1,Wn(1)=wn,Wn(l)=w˜n,Un=un,Vn=vn)P¯P˜(Cn,1)(ENE|fB(J)=1,J=1,Wn(1)=wn,Un=un,Vn=vn) (A52)

Since the codewords are generated independently of each other and the binning operation is done independent of the codebook generation, we have

P¯P˜(Cn,1)(Wn(1)=wn|Wn(l)=w˜n,Un=un,Vn=vn)=P¯P˜(Cn,1)(Wn(1)=wn|Un=un,Vn=vn), (A53)

and

P¯P˜(Cn,1)(fB(J)=1|J=1,Wn(1)=wn,Wn(l)=w˜n,Un=un,Vn=vn)=P¯P˜(Cn,1)(fB(J)=1|J=1,Wn(1)=wn,Un=un,Vn=vn). (A54)

Also, note that

P¯P˜(Cn,1)(ENE|fB(J)=1,J=1,Wn(1)=wn,Wn(l)=w˜n,Un=un,Vn=vn)=P¯P˜(Cn,1)(ENE|fB(J)=1,J=1,Wn(1)=wn,Un=un,Vn=vn). (A55)

Next, consider the term in (A51). Let

F:={Wn(1)=wn,Un=un,Vn=vn},F:={Wn(1)=wn,Wn(l)=w˜n,Un=un,Vn=vn}.

Then, the numerator and denominator of (A51) can be written as

P¯P˜(Cn,1)(J=1|F)=Eμni=1nPU|W(ui|wi)i=1nPU|W(ui|wi)+i=1nPU|W(ui|w˜i)+j1,li=1nPU|W(ui|Wi(j))Eμni=1nPU|W(ui|wi)i=1nPU|W(ui|wi)+j1,li=1nPU|W(ui|Wi(j)), (A56)

and

P¯P˜(Cn,1)(J=1|F)=Eμni=1nPU|W(ui|wi)i=1nPU|W(ui|wi)+j1i=1nPU|W(ui|Wi(j)), (A57)

respectively. The R.H.S. of (A56) (resp. (A57)) denote the average probability that J=1 is chosen by PEu(Bn) given Wn(1)=wn, Un=un and Mn2 (resp. Mn1) other independent codewords in Bn. Let

El:=i=1nPU|W(ui|Wi(l))maxi=1nPU|W(ui|Wi(j)),jMn{1}i=1nPU|W(ui|wi).

Please note that

Eμn|Elci=1nPU|W(ui|wi)i=1nPU|W(ui|wi)+j1i=1nPU|W(ui|Wi(j))12Eμn|Elci=1nPU|W(ui|wi)i=1nPU|W(ui|wi)+j1,li=1nPU|W(ui|Wi(j)). (A58)

Hence, denoting by μ¯n the probability measure induced by μn, we have

P¯P˜(Cn,1)(J=1|F)P¯P˜(Cn,1)(J=1|F)Eμni=1nPU|W(ui|wi)i=1nPU|W(ui|wi)+j1,li=1nPU|W(ui|Wi(j))Eμni=1nPU|W(ui|wi)i=1nPU|W(ui|wi)+j1i=1nPU|W(ui|Wi(j))Eμni=1nPU|W(ui|wi)i=1nPU|W(ui|wi)+j1,li=1nPU|W(ui|Wi(j))μ¯n(Elc)Eμn|Elci=1nPU|W(ui|wi)i=1nPU|W(ui|wi)+j1i=1nPU|W(ui|Wi(j))Eμni=1nPU|W(ui|wi)i=1nPU|W(ui|wi)+j1,li=1nPU|W(ui|Wi(j))12μ¯n(Elc)Eμn|Elci=1nPU|W(ui|wi)i=1nPU|W(ui|wi)+j1,li=1nPU|W(ui|Wi(j)) (A59)
=Eμni=1nPU|W(ui|wi)i=1nPU|W(ui|wi)+j1,li=1nPU|W(ui|Wi(j))12Eμni=1nPU|W(ui|wi)i=1nPU|W(ui|wi)+j1,li=1nPU|W(ui|Wi(j))12μ¯n(El)Eμn|Eli=1nPU|W(ui|wi)i=1nPU|W(ui|wi)+j1,li=1nPU|W(ui|Wi(j)) (A60)
Eμni=1nPU|W(ui|wi)i=1nPU|W(ui|wi)+j1,li=1nPU|W(ui|Wi(j))12Eμni=1nPU|W(ui|wi)i=1nPU|W(ui|wi)+j1,li=1nPU|W(ui|Wi(j))12μ¯n(El) (A61)
Eμni=1nPU|W(ui|wi)i=1nPU|W(ui|wi)+j1,li=1nPU|W(ui|Wi(j))12Eμni=1nPU|W(ui|wi)i=1nPU|W(ui|wi)+j1,li=1nPU|W(ui|Wi(j))een(IP(U;W)+η) (A62)
2+o(1)3, (A63)

where (A59) is due to (A58); (A61) is since the term within Eμn|El[·] in (A60) is upper bounded by one; (A62) is since μ¯n(El)een(IP(U;W)+η) for some η>0 which follows similar to ([68], Section 3.6.3), and (A63) follows since the term within the expectation which is exponential in order dominates the double exponential term. From (A52)–(A55), (A63) and (A50) follows. The analysis of the other terms in (A48) is the same as in the SHA scheme in [7], and results in the error exponent (within an additive O(δ) term) claimed in the Theorem. We refer the reader to ([13], Theorem 2) for a detailed proof (In [13], the communication channel between the observer and the detector is a DMC. However, since the coding scheme used in the achievability part of Theorem 2 in [13] is a separation-based scheme, the error exponent when the channel is noiseless can be recovered by setting E3(·) and E4(·) in Theorem 2 to ). By the random coding argument followed by the standard expurgation technique [72] (see ([13], Proof of Theorem 2)), there exists a deterministic codebook and binning function pair Cn=(Bn,fB) such that the type I and type II error probabilities are within a constant multiplicative factor of their average values over the random ensemble Cn, and

Si(wi(J),Vi)(M,wn(J),Vn,Si1),i[n], (A64)
P˜(Cn,0)Ψ(Cn,0)eγ4n, (A65)
P˜(Cn,1)Ψ(Cn,1)eγ4n,ifQU=PU, (A66)
and i=1nPWΨwi(J)(Cn,l)eγ5n,l=0,1, (A67)

where γ4 and γ5 are some positive numbers. Since the average type I error probability for our scheme tends to zero asymptotically, and the error exponent is unaffected by a constant multiplicative scaling of the type II error probability, this codebook achieves the same type I error probability and error exponent as the average over the random ensemble. Using this deterministic codebook for encoding and decoding, we first lower bound the equivocation and average distortion of Sn at the detector as follows:

First consider the equivocation of Sn under the null hypothesis.

HP˜(Cn,0)(Sn|M,Vn)PP˜(Cn,0)(M0)H(Sn|M0,Vn)
(1enΩ(δ))HP˜(Cn,0)(Sn|M0,Vn) (A68)
(1enΩ(δ))HP˜(Cn,0)(Sn|wn(J),Vn) (A69)
=(1enΩ(δ))HP˜(Cn,0)(Sn|wn(J),Vn) (A70)
(1enΩ(δ))HΨ(Cn,0)(Sn|wn(J),Vn)2eγ4nlog|S|n|V|neγ4n (A71)
=i=1nHΨ(Cn,0)(Si|wi(J),Vi)enΩ(δ)i=1nHΨ(Cn,0)(Si|wi(J),Vi)o(1) (A72)
i=1nHΨ(Cn,0)(Si|wi(J),Vi)nenΩ(δ)HP(S|V)o(1) (A73)
=i=1nHΨ(Cn,0)(Si|wi(J),Vi)o(1) (A74)
=nHP(S|W,V)o(1). (A75)

Here, (A68) follows from (A27); (A69) follows since M is a function of wn(J) for a deterministic codebook; (A71) follows from (A65) and Lemma 3; (A72) follows from (A24); and (A75) follows from (A67) and ΨSiVi|wi(0)=PSV|W(0),i[n].

If QU=PU, it follows similarly to above that

HP˜(Cn,1)(Sn|M,Vn)1enΩ(δ)HΨ(Cn,1)(Sn|wn(J),Vn)2eγ4nlog|S|n|V|neγ4n (A76)
=i=1nHΨ(Cn,1)(Si|wi(J),Vi)enΩ(δ)i=1nHΨ(Cn,1)(Si|wi(J),Vi)o(1) (A77)
i=1nHΨ(Cn,1)(Si|wi(J),Vi)nenΩ(δ)HQ(S|V)o(1) (A78)
=i=1nHΨ(Cn,1)(Si|wi(J),Vi)o(1) (A79)
=nHQ(S|W,V)o(1). (A80)

Finally, consider the case H=1 and QUPU. We have for δ small enough that

PP˜(Cn,1)M=0=PP˜(Cn,1)UnT[PU]δn1en(D(PU||QU)O(δ))(n)1. (A81)

Hence, for δ small enough, we can write

HP˜(Cn,1)(Sn|M,Vn)HP˜(Cn,1)(Sn|M,Vn,Π(Un,δ,PU))
1en(D(PU||QU)O(δ))HP˜(Cn,1)(Sn|M,Vn,Π(Un,δ,PU)=1) (A82)
=1en(D(PU||QU)O(δ))HP˜(Cn,1)(Sn|Vn,Π(Un,δ,PU)=1) (A83)
1en(D(PU||QU)O(δ))HP˜(Cn,1)(Sn|Vn)o(1) (A84)
=nHQ(S|V)nen(D(PU||QU)O(δ))HQ(S|V)o(1)=nHQ(S|V)o(1). (A85)

Here, (A82) follows from (A81); (A83) follows since Π(Un,δ,PU)=1 implies M=0; (A84) follows from Lemma 3 and (15). Thus, since δ>0 is arbitrary, we have shown that for ϵ(0,1), (R,κ,Λ0,Λ1)Re(ϵ) if (18)–(21) holds.

On the other hand, average distortion of Sn at the detector can be lower bounded under H=0 as follows:

mingi,n(r)EdSn,S^n|H=0
=minϕ¯i,n(m,vn,si1)i=1nEP˜(Cn,0)i=1ndSi,ϕ¯i(m,vn,si1) (A86)
minϕ¯i(m,vn,si1)i=1nEΨ(Cn,0)i=1nd(Si,ϕ¯i(m,vn,si1))nenγ4Dm (A87)
minϕ¯i(·,·)i=1nEΨ(Cn,0)i=1nd(Si,ϕ¯i(wi(J),Vi))nenγ4Dm (A88)
nminϕ(·,·)EPd(S,ϕ(W,V))nenγ4+enγ5Dm (A89)
=nminϕ(·,·)i=1nEPd(S,ϕ(W,V))o(1). (A90)

Here, (A86) follows from Lemma 2; (A87) follows from ([43], Property 2(b)) due to (A65) and boundedness of distortion measure; (A88) follows from the Markov chain in (A64); (A89) follows from (A67) and the fact that ΨSiVi|wi(J)(0)=PSV|W(0),i[n].

Next, consider the case H=1 and QU=PU. Then, similarly to above, we can write

mingi,n(r)EdSn,S^n|H=1=minϕ¯i(m,vn,si1)i=1nEP˜(Cn,1)i=1ndSi,ϕi(M,Vn,Si1)minϕ¯i(m,vn,si1)i=1nEΨ(Cn,1)i=1nd(Si,ϕi(M,Vn,Si1))nenγ4Dm (A91)
minϕi(·,·)i=1nEΨ(Cn,1)i=1nd(Si,ϕi(wi,Vi))nenγ4Dm (A92)
nminϕ(·,·)i=1nEQd(S,ϕ(W,V))n(enγ4+enγ5)Dm. (A93)
=nminϕ(·,·)i=1nEQd(S,ϕ(W,V))o(1). (A94)

If H=1 and QUPU, we have

mingi,n(r)EdSn,S^n|H=1PP˜(Cn,1)M=0|H=1minϕ¯i(m,vn,si1)i=1ni=1nEP˜(Cn,1)dSi,ϕi(0,Vn,Si1)PP˜(Cn,1)M=0|H=1minϕi(v)i=1nEQi=1nd(Si,ϕi(Vi))Dmo(1) (A95)
=nminϕ(·)EQd(S,ϕ(V))o(1). (A96)

Here, (A96) follows from (15) in Lemma 4 and (A96) follows from (A81). Thus, since δ>0 is arbitrary, we have shown that (R,κ,Δ0,Δ1)Rd(ϵ), ϵ(0,1), provided that (18), (19), (24) and (25) are satisfied. This completes the proof of the theorem.

Appendix E. Proof of Lemma 5

Consider the |U|+2 functions of PU|W,

PU(ui)=wWPW(w)PU|W(ui|w),i=1,2,,|U|1, (A97)
HP(U|W,Z)=wPW(w)g1(PU|W,w), (A98)
HP(Y|W,Z)=wPW(w)g2(PU|W,w), (A99)
HP(S|W,Y,Z)=wPW(w)g3(PU|W,w), (A100)

where

g1(PU|W,w)=u,zPU|W(u|w)PZ|U(z|u)logPU|W(u|w)PZ|U(z|u)uPU|W(u|w)PZ|U(z|u),g2(PU|W,w)=y,z,uPU|W(u|w)PYZ|U(y,z|u)loguPU|W(u|w)PYZ|U(y,z|u)uPU|W(u|w)PZ|U(z|u),g3(PU|W,w)=s,y,z,uPU|W(u|w)PSYZ|U(s,y,z|u)loguPU|W(u|w)PSYZ|U(s,y,z|u)uPU|W(u|w)PYZ|U(y,z|u).

Thus, by the Fenchel–Eggleston–Carathéodory’s theorem [68], it is sufficient to have at most |U|1 points in the support of W to preserve PU and three more to preserve HP(U|W,Z), HP(Y|W,Z) and HP(S|W,Z,Y). Noting that HP(Y|Z) and HP(U|Z) are automatically preserved since PU is preserved (and (Y,Z,S)UW holds), |W|=|U|+2 points are sufficient to preserve the R.H.S. of Equations (28)–(30). This completes the proof for the case of Re. Similarly, considering the |U|+1 functions of PW|U given in (A97)–(A99) and

EPdS,ϕ(W,Y,Z)=wPW(w)g4(w,PW|U),

where

g4(w,PW|U)=s,u,y,zPU|W(u|w)PYZS|U(y,z,s|u)d(s,ϕ(w,y,z)),

similar result holds also for the case of Rd.

Author Contributions

Conceptualization, S.S., A.C. and D.G.; writing—original draft preparation, S.S.; supervision, A.C. and D.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the European Research Council Starting Grant project BEACON (grant agreement number 677854).

Conflicts of Interest

The authors declare no conflict of interest.

References

  • 1.Appari A., Johnson E. Information security and privacy in healthcare: Current state of research. Int. J. Internet Enterp. Manag. 2010;6:279–314. doi: 10.1504/IJIEM.2010.035624. [DOI] [Google Scholar]
  • 2.Gross R., Acquisti A. Information revelation and privacy in online social networks; Proceedings of the ACM workshop on Privacy in Electronic Society; Alexandria, VA, USA. 7 November 2005; pp. 71–80. [Google Scholar]
  • 3.Miyazaki A., Fernandez A. Consumer Perceptions of Privacy and Security Risks for Online Shopping. J. Consum. Aff. 2001;35:27–44. doi: 10.1111/j.1745-6606.2001.tb00101.x. [DOI] [Google Scholar]
  • 4.Giaconi G., Gündüz D., Poor H.V. Privacy-Aware Smart Metering: Progress and Challenges. IEEE Signal Process. Mag. 2018;35:59–78. doi: 10.1109/MSP.2018.2841410. [DOI] [Google Scholar]
  • 5.Ahlswede R., Csiszár I. Hypothesis Testing with Communication Constraints. IEEE Trans. Inf. Theory. 1986;32:533–542. doi: 10.1109/TIT.1986.1057194. [DOI] [Google Scholar]
  • 6.Han T.S. Hypothesis Testing with Multiterminal Data Compression. IEEE Trans. Inf. Theory. 1987;33:759–772. doi: 10.1109/TIT.1987.1057383. [DOI] [Google Scholar]
  • 7.Shimokawa H., Han T.S., Amari S. Error Bound of Hypothesis Testing with Data Compression; Proceedings of the IEEE International Symposium on Information Theory; Trondheim, Norway. 27 June–1 July 1994. [Google Scholar]
  • 8.Shalaby H.M.H., Papamarcou A. Multiterminal Detection with Zero-Rate Data Compression. IEEE Trans. Inf. Theory. 1992;38:254–267. doi: 10.1109/18.119685. [DOI] [Google Scholar]
  • 9.Zhao W., Lai L. Distributed Testing Against Independence with Multiple Terminals; Proceedings of the 52nd Annual Allerton Conference; Monticello, IL, USA. 30 September–3 October 2014; pp. 1246–1251. [Google Scholar]
  • 10.Katz G., Piantanida P., Debbah M. Distributed Binary Detection with Lossy Data Compression. IEEE Trans. Inf. Theory. 2017;63:5207–5227. doi: 10.1109/TIT.2017.2688348. [DOI] [Google Scholar]
  • 11.Rahman M.S., Wagner A.B. On the Optimality of Binning for Distributed Hypothesis Testing. IEEE Trans. Inf. Theory. 2012;58:6282–6303. doi: 10.1109/TIT.2012.2206793. [DOI] [Google Scholar]
  • 12.Sreekumar S., Gündüz D. Distributed Hypothesis Testing Over Noisy Channels; Proceedings of the IEEE International Symposium on Information Theory; Aachen, Germany. 25–30 June 2017; pp. 983–987. [Google Scholar]
  • 13.Sreekumar S., Gündüz D. Distributed Hypothesis Testing Over Discrete Memoryless Channels. IEEE Trans. Inf. Theory. 2020;66:2044–2066. doi: 10.1109/TIT.2019.2953750. [DOI] [Google Scholar]
  • 14.Salehkalaibar S., Wigger M., Timo R. On Hypothesis Testing Against Conditional Independence with Multiple Decision Centers. IEEE Trans. Commun. 2018;66:2409–2420. doi: 10.1109/TCOMM.2018.2798659. [DOI] [Google Scholar]
  • 15.Salehkalaibar S., Wigger M. Distributed Hypothesis Testing based on Unequal-Error Protection Codes. arXiv. 2018 doi: 10.1109/TIT.2020.2993172.1806.05533 [DOI] [Google Scholar]
  • 16.Han T.S., Kobayashi K. Exponential-Type Error Probabilities for Multiterminal Hypothesis Testing. IEEE Trans. Inf. Theory. 1989;35:2–14. doi: 10.1109/18.42171. [DOI] [Google Scholar]
  • 17.Haim E., Kochman Y. On Binary Distributed Hypothesis Testing. arXiv. 20181801.00310 [Google Scholar]
  • 18.Weinberger N., Kochman Y. On the Reliability Function of Distributed Hypothesis Testing Under Optimal Detection. IEEE Trans. Inf. Theory. 2019;65:4940–4965. doi: 10.1109/TIT.2019.2910065. [DOI] [Google Scholar]
  • 19.Bayardo R., Agrawal R. Data privacy through optimal k-anonymization; Proceedings of the International Conference on Data Engineering; Tokyo, Japan. 5–8 April 2005; pp. 217–228. [Google Scholar]
  • 20.Agrawal R., Srikant R. Privacy-preserving data mining; Proceedings of the ACM SIGMOD International Conference on Management of Data; Dallas, TX, USA. 18–19 May 2000; pp. 439–450. [Google Scholar]
  • 21.Bertino E. Big Data-Security and Privacy; Proceedings of the IEEE International Congress on BigData; New York, NY, USA. 27 June–2 July 2015; pp. 425–439. [Google Scholar]
  • 22.Gertner Y., Ishai Y., Kushilevitz E., Malkin T. Protecting Data Privacy in Private Information Retrieval Schemes. J. Comput. Syst. Sci. 2000;60:592–629. doi: 10.1006/jcss.1999.1689. [DOI] [Google Scholar]
  • 23.Hay M., Miklau G., Jensen D., Towsley D., Weis P. Resisting structural re-identification in anonymized social networks. J. Proc. VLDB Endow. 2008;1:102–114. doi: 10.14778/1453856.1453873. [DOI] [Google Scholar]
  • 24.Narayanan A., Shmatikov V. De-anonymizing Social Networks; Proceedings of the IEEE Symposium on Security and Privacy; Berkeley, CA, USA. 17–20 May 2009. [Google Scholar]
  • 25.Liao J., Sankar L., Tan V., Calmon F. Hypothesis Testing Under Mutual Information Privacy Constraints in the High Privacy Regime. IEEE Trans. Inf. Forensics Secur. 2018;13:1058–1071. doi: 10.1109/TIFS.2017.2779108. [DOI] [Google Scholar]
  • 26.Liao J., Sankar L., Calmon F., Tan V. Hypothesis testing under maximal leakage privacy constraints; Proceedings of the IEEE International Symposium on Information Theory; Aachen, Germany. 25–30 June 2017. [Google Scholar]
  • 27.Gilani A., Amor S.B., Salehkalaibar S., Tan V. Distributed Hypothesis Testing with Privacy Constraints. Entropy. 2019;21:478. doi: 10.3390/e21050478. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Gündüz D., Erkip E., Poor H.V. Secure lossless compression with side information; Proceedings of the IEEE Information Theory Workshop; Porto, Portugal. 5–9 May 2008; pp. 169–173. [Google Scholar]
  • 29.Gündüz D., Erkip E., Poor H.V. Lossless compression with security constraints; Proceedings of the IEEE International Symposium on Information Theory; Toronto, ON, Canada. 6–11 July 2008; pp. 111–115. [Google Scholar]
  • 30.Mhanna M., Piantanida P. On secure distributed hypothesis testing; Proceedings of the IEEE International Symposium on Information Theory; Hong Kong, China. 14–19 June 2015; pp. 1605–1609. [Google Scholar]
  • 31.Sweeney L. K-anonymity: A model for protecting privacy. Int. J. Uncertain. Fuzziness Knowl. Based Syst. 2002;10:557–570. doi: 10.1142/S0218488502001648. [DOI] [Google Scholar]
  • 32.Dwork C., McSherry F., Nissim K., Smith A. Theory of Cryptography. Springer; Berlin/Heidelberg, Germany: 2006. Calibrating Noise to Sensitivity in Private Data Analysis; pp. 265–284. [Google Scholar]
  • 33.Calmon F., Fawaz N. Privacy Against Statistical Inference; Proceedings of the 50th Annual Allerton Conference; Illinois, IL, USA. 1–5 October 2012; pp. 1401–1408. [Google Scholar]
  • 34.Makhdoumi A., Salamatian S., Fawaz N., Medard M. From the information bottleneck to the privacy funnel; Proceedings of the IEEE Information Theory Workshop; Hobart, Australia. 2–5 November 2014; pp. 501–505. [Google Scholar]
  • 35.Calmon F., Makhdoumi A., Medard M. Fundamental limits of perfect privacy; Proceedings of the IEEE International Symposium on Information Theory; Hong Kong, China. 14–19 June 2015; pp. 1796–1800. [Google Scholar]
  • 36.Issa I., Kamath S., Wagner A.B. An Operational Measure of Information Leakage; Proceedings of the Annual Conference on Information Science and Systems; Princeton, NJ, USA. 16–18 March 2016; pp. 1–6. [Google Scholar]
  • 37.Rassouli B., Gündüz D. Optimal Utility-Privacy Trade-off with Total Variation Distance as a Privacy Measure. IEEE Trans. Inf. Forensics Secur. 2019;15:594–603. doi: 10.1109/TIFS.2019.2903658. [DOI] [Google Scholar]
  • 38.Wagner I., Eckhoff D. Technical Privacy Metrics: A Systematic Survey. arXiv. 2015 doi: 10.1145/3168389.1512.00327v1 [DOI] [Google Scholar]
  • 39.Goldwasser S., Micali S. Probabilistic encryption. J. Comput. Syst. Sci. 1984;28:270–299. doi: 10.1016/0022-0000(84)90070-9. [DOI] [Google Scholar]
  • 40.Bellare M., Tessaro S., Vardy A. Semantic Security for the Wiretap Channel; Proceedings of the Advances in Cryptology-CRYPTO 2012; Heidelberg, Germany. 19–23 August 2012; pp. 294–311. [Google Scholar]
  • 41.Yamamoto H. A Rate-Distortion Problem for a Communication System with a Secondary Decoder to be Hindered. IEEE Trans. Inf. Theory. 1988;34:835–842. doi: 10.1109/18.9781. [DOI] [Google Scholar]
  • 42.Tandon R., Sankar L., Poor H.V. Discriminatory Lossy Source Coding: Side Information Privacy. IEEE Trans. Inf. Theory. 2013;59:5665–5677. doi: 10.1109/TIT.2013.2259613. [DOI] [Google Scholar]
  • 43.Schieler C., Cuff P. Rate-Distortion Theory for Secrecy Systems. IEEE Trans. Inf. Theory. 2014;60:7584–7605. doi: 10.1109/TIT.2014.2365175. [DOI] [Google Scholar]
  • 44.Agarwal G.K. Ph.D. Thesis. University of California; Los Angeles, CA, USA: 2019. [(accessed on 3 January 2020)]. On Information Theoretic and Distortion-based Security. Available online: https://escholarship.org/uc/item/7qs7z91g. [Google Scholar]
  • 45.Li Z., Oechtering T., Gündüz D. Privacy against a hypothesis testing adversary. IEEE Trans. Inf. Forensics Secur. 2019;14:1567–1581. doi: 10.1109/TIFS.2018.2882343. [DOI] [Google Scholar]
  • 46.Cuff P., Yu L. Differential privacy as a mutual information constraint; Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security; Vienna, Austria. 24–28 October 2016; pp. 43–54. [Google Scholar]
  • 47.Goldfeld Z., Cuff P., Permuter H.H. Semantic-Security Capacity for Wiretap Channels of Type II. IEEE Trans. Inf. Theory. 2016;62:3863–3879. doi: 10.1109/TIT.2016.2565483. [DOI] [Google Scholar]
  • 48.Sreekumar S., Bunin A., Goldfeld Z., Permuter H.H., Shamai S. The Secrecy Capacity of Cost-Constrained Wiretap Channels. arXiv. 20202004.04330 [Google Scholar]
  • 49.Kasiviswanathan S.P., Lee H.K., Nissim K., Raskhodnikova S., Smith A. What can we learn privately? SIAM J. Comput. 2011;40:793–826. doi: 10.1137/090756090. [DOI] [Google Scholar]
  • 50.Duchi J.C., Jordan M.I., Wainwright M.J. Local Privacy and Statistical Minimax Rates; Proceedings of the 2013 IEEE 54th Annual Symposium on Foundations of Computer Science; Berkeley, CA, USA. 26–29 October 2013; pp. 429–438. [Google Scholar]
  • 51.Duchi J.C., Jordan M.I., Wainwright M.J. Privacy Aware Learning. J. ACM. 2014;61:1–57. doi: 10.1145/2666468. [DOI] [Google Scholar]
  • 52.Wang Y., Lee J., Kifer D. Differentially Private Hypothesis Testing, Revisited. arXiv. 20151511.03376 [Google Scholar]
  • 53.Gaboardi M., Lim H., Rogers R., Vadhan S. Differentially Private Chi-Squared Hypothesis Testing: Goodness of Fit and Independence Testing; Proceedings of the 33rd International Conference on Machine Learning; New York City, NY, USA. 19–24 June 2016; pp. 2111–2120. [Google Scholar]
  • 54.Rogers R.M., Roth A., Smith A.D., Thakkar O. Max-Information, Differential Privacy, and Post-Selection Hypothesis Testing. arXiv. 20161604.03924 [Google Scholar]
  • 55.Cai B., Daskalakis C., Kamath G. Priv’IT: Private and Sample Efficient Identity Testing; Proceedings of the 34th International Conference on Machine Learning; Sydney, Australia. 6–11 August 2017; pp. 635–644. [Google Scholar]
  • 56.Sheffet O. Locally Private Hypothesis Testing; Proceedings of the 35th International Conference on Machine Learning; Stockholm, Sweden. 10–15 July 2018; pp. 4605–4614. [Google Scholar]
  • 57.Acharya J., Sun Z., Zhang H. Advances in Neural Information Processing Systems 31. Curran Associates Inc.; Red Hook, NY, USA: 2018. Differentially Private Testing of Identity and Closeness of Discrete Distributions; pp. 6878–6891. [Google Scholar]
  • 58.Canonne C.L., Kamath G., McMillan A., Smith A., Ullman J. The Structure of Optimal Private Tests for Simple Hypotheses; Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing; Phoenix, Arizona. 23–26 June 2019; pp. 310–321. [Google Scholar]
  • 59.Aliakbarpour M., Diakonikolas I., Kane D., Rubinfeld R. Advances in Neural Information Processing Systems 32. Curran Associates Inc.; Red Hook, NY, USA: 2019. Private Testing of Distributions via Sample Permutations; pp. 10878–10889. [Google Scholar]
  • 60.Csiszár I., Körner J. Information Theory: Coding Theorems for Discrete Memoryless Systems. Cambridge University Press; Cambridge, UK: 2011. [Google Scholar]
  • 61.Wang Y., Basciftci Y.O., Ishwar P. Privacy-Utility Tradeoffs under Constrained Data Release Mechanisms. arXiv. 20171710.09295 [Google Scholar]
  • 62.Cuff P. Distributed Channel Synthesis. IEEE Trans. Inf. Theory. 2013;59:7071–7096. doi: 10.1109/TIT.2013.2279330. [DOI] [Google Scholar]
  • 63.Song E.C., Cuff P., Poor H.V. The Likelihood Encoder for Lossy Compression. IEEE Trans. Inf. Theory. 2016;62:1836–1849. doi: 10.1109/TIT.2016.2529657. [DOI] [Google Scholar]
  • 64.Wyner A.D. The Common Information of Two Dependent Random Variables. IEEE Trans. Inf. Theory. 1975;21:163–179. doi: 10.1109/TIT.1975.1055346. [DOI] [Google Scholar]
  • 65.Han T.S., Verdú S. Approximation Theory of Output Statistics. IEEE Trans. Inf. Theory. 1993;39:752–772. doi: 10.1109/18.256486. [DOI] [Google Scholar]
  • 66.Sreekumar S., Gündüz D., Cohen A. Distributed Hypothesis Testing Under Privacy Constraints; Proceedings of the IEEE Information Theory Workshop (ITW); Guangzhou, China. 25–29 November 2018; pp. 1–5. [Google Scholar]
  • 67.Tishby N., Pereira F., Bialek W. The Information Bottleneck Method. arXiv. 2000physics/0004057 [Google Scholar]
  • 68.Gamal A.E., Kim Y.H. Network Information Theory. Cambridge University Press; Cambridge, UK: 2011. [Google Scholar]
  • 69.Polyanskiy Y. Ph.D. Thesis. Princeton University; Princeton, NJ, USA: 2010. Channel Coding: Non-Asymptotic Fundamental Limits. [Google Scholar]
  • 70.Yang W., Caire G., Durisi G., Polyanskiy Y. Optimum Power Control at Finite Blocklength. IEEE Trans. Inf. Theory. 2015;61:4598–4615. doi: 10.1109/TIT.2015.2456175. [DOI] [Google Scholar]
  • 71.Villard J., Piantanida P. Secure Multiterminal Source Coding With Side Information at the Eavesdropper. IEEE Trans. Inf. Theory. 2013;59:3668–3692. doi: 10.1109/TIT.2013.2245394. [DOI] [Google Scholar]
  • 72.Gallager R.G. A simple derivation of the coding theorem and some applications. IEEE Trans. Inf. Theory. 1965;11:3–18. doi: 10.1109/TIT.1965.1053730. [DOI] [Google Scholar]

Articles from Entropy are provided here courtesy of Multidisciplinary Digital Publishing Institute (MDPI)

RESOURCES