Abstract
Differential Privacy (DP) has received increasing attention as a rigorous privacy framework. Many existing studies employ traditional DP mechanisms (e.g., the Laplace mechanism) as primitives, which assume that the data are independent or that adversaries do not have knowledge of the data correlations. However, continuously generated data in the real world tend to be temporally correlated, and such correlations can be acquired by adversaries. In this paper, we investigate the potential privacy loss of a traditional DP mechanism under temporal correlations in the context of continuous data release. First, we model the temporal correlations using a Markov model and analyze the privacy leakage of a DP mechanism when adversaries have knowledge of such temporal correlations. Our analysis reveals that the privacy loss of a DP mechanism may accumulate and increase over time; we call this temporal privacy leakage. Second, to measure such privacy loss, we design an efficient algorithm that calculates it in polynomial time. Although the temporal privacy leakage may increase over time, we also show that its supremum may exist in some cases. Third, to bound the privacy loss, we propose mechanisms that convert any existing DP mechanism into one that protects against temporal privacy leakage. Experiments with synthetic data confirm that our approach is efficient and effective.
I. Introduction
With the development of wearable and mobile devices, vast amounts of temporal data generated by individuals are being collected, such as trajectories and web page click streams. The continuous publication of statistics from these temporal data has the potential for significant social benefits such as disease surveillance [4], real-time traffic monitoring [18] and web mining [26]. However, privacy concerns hinder the wider use of these data. To this end, differential privacy under continual observation [1] [3] [8] [13] [15] [17] [22] [36] has received increasing attention because it provides a rigorous privacy guarantee. Intuitively, differential privacy (DP) [12] ensures that the modification of any single user's data in the database has a "slight" (bounded in ε) impact on the change in outputs. The parameter ε is a positive real number that controls the privacy level; larger values of ε correspond to larger privacy leakage.
However, most existing works on differentially private continuous aggregate release have an implicit assumption of data independence, i.e., that there is no correlation between the data. Recent studies [23] [24] [25] point out that traditional DP may not guarantee the expected privacy on correlated data. The following example shows how temporal correlations may degrade the expected privacy guarantee of DP.
Example 1
Consider the scenario of continuous aggregate release illustrated in Figure 1. A trusted server collects users' locations at each time point in Figure 1(a) and continuously publishes aggregates (i.e., the counts of people at each location) in Figure 1(c) with differential privacy. Our goal is to achieve ε-DP at each time point t (event-level ε-DP [13] [15]) where t ∈ [1, T]. Suppose that each user appears at only one location at each time point. According to the Laplace mechanism [14], adding Lap(1/ε) noise¹ to perturb each count in Figure 1(c) achieves ε-DP at each time point. However, the expected privacy guarantee may decay due to temporal correlations as follows. Using auxiliary information, such as road networks, an attacker may know users' mobility patterns, such as "always arriving at loc5 after visiting loc4" (shown in Figure 1(b)), leading to the patterns illustrated by the solid lines in Figure 1(c). The temporal correlation due to this road network can be represented as Pr(l^t = loc5 | l^{t−1} = loc4) = 1, where l^t is the location of a user at time t. That is, given the previous count of loc4, an attacker can derive the current count of loc5. Consequently, because an adversary can perform inference using such correlations between the two consecutive time points (i.e., as if the same count were released twice), adding Lap(1/ε) noise to each count only guarantees 2ε-DP at that time point. Furthermore, let us consider an extreme case of temporal correlation (e.g., a terrible traffic congestion) Pr(l^t = loc4 | l^{t−1} = loc4) = Pr(l^t = loc5 | l^{t−1} = loc5) = 1 (i.e., the counts of loc4 and loc5 do not change over time). Then, adding Lap(1/ε) noise to each count only guarantees Tε-DP at time point T.
Fig. 1. Differentially Private Continuous Aggregate Release under Temporal Correlations.
It is reasonable to consider that adversaries may obtain such temporal correlations, which commonly exist in real life and are easily acquired from public information or historical data. In addition to road networks, countless factors may cause temporal correlations, such as the common patterns characterizing human activities [19] and weather conditions [20].
Few studies in the literature have investigated the potential privacy loss of event-level ε-DP under temporal correlations as shown in Example 1. A direct method (without finely utilizing the probability of correlation) is to amplify the perturbation in terms of group differential privacy [9] [14], i.e., to protect the correlated data as a group. In Example 1, for the temporal correlation Pr(l^t = loc5 | l^{t−1} = loc4) = 1, we can protect the counts of loc4 at time t − 1 and loc5 at time t as a group (the sensitivity becomes 2) by increasing the scale of the perturbation to Lap(2/ε) at each time point; for the temporal correlation Pr(l^t = loc_i | l^{t−1} = loc_i) = 1, in order to guarantee ε-DP at time T, we need to add Lap(T/ε) noise at each time point because the privacy leakage accumulates over time. However, this technique is not suitable for finely preventing privacy leakage under probabilistic correlations and may over-perturb the data as a result. For example, regardless of whether Pr(l^t = loc_i | l^{t−1} = loc_i) is 1 or 0.1, it always protects the correlated data in a bundle.
Although a few studies have investigated the issue of differential privacy under probabilistic correlations, they are not applicable to continuous data release because of their different problem settings. The following two works focused on one-shot data release and different types of correlations. Yang et al. [37] proposed Bayesian differential privacy (BDP), which measures the privacy leakage under probabilistic correlations between tuples, modeled by a Gaussian Markov Random Field without taking the time factor into account. Liu et al. [29] proposed dependent differential privacy by introducing two parameters, the dependence size and the probabilistic dependence relationship between tuples; however, it is not clear whether we can specify them for temporally correlated data. Another line of work [32] [34] [35] has investigated adversaries with knowledge of temporal correlations. They focused on designing new mechanisms extending DP for protecting a single user's location privacy, whereas we attempt to quantify the potential privacy loss of a traditional DP mechanism in the context of continuous aggregate release.
We call the adversary considered in traditional DP with additional knowledge of probabilistic temporal correlations an adversary A_i^T. Rigorously quantifying and bounding the privacy leakage against A_i^T remains a challenge. Therefore, our goal is to solve the following problems in this paper:
How do we formalize and define the privacy loss against A_i^T? (Section III)
How do we calculate the privacy loss against A_i^T? (Section IV)
How do we bound the privacy loss against A_i^T? (Section V)
A. Contributions
In this work, for the first time, we quantify and bound the privacy leakage of a DP mechanism due to temporal correlations. Our contributions are summarized as follows.
First, we rigorously define adversaries A_i^T with knowledge of temporal correlations that are described by the commonly used Markov model. The temporal correlations include backward and forward correlations, i.e., Pr(l_i^{t−1} | l_i^t) and Pr(l_i^{t+1} | l_i^t), where l_i^t denotes the value (e.g., location) of user i at time t. We then define Temporal Privacy Leakage (TPL) as the privacy loss of a DP mechanism at time t against A_i^T. TPL includes two parts: Backward Privacy Leakage (BPL) and Forward Privacy Leakage (FPL), due to the existence of backward and forward temporal correlations. Our analysis shows that BPL may accumulate from previous privacy leakage and FPL may increase with future releases. Intuitively, BPL at time t is affected by previously released data, and FPL at time t will be affected by future releases. We define α-differential privacy under temporal correlation, denoted α-DP_T, to formalize the privacy guarantee of a DP mechanism against A_i^T, i.e., the temporal privacy leakage should be bounded by α. We prove a new form of sequential composition theorem for α-DP_T (different from the traditional sequential composition [31] for ε-DP).
Second, we efficiently calculate the temporal privacy leakage under given backward and forward temporal correlations. We transform the calculation of temporal privacy leakage into finding an optimal solution of a linear-fractional programming problem. This type of problem can be solved using the simplex algorithm in exponential time. By exploiting the constraints, we propose a polynomial-time algorithm to finely quantify the temporal privacy leakage.
Third, we design private data release algorithms that can be used to convert a traditional DP mechanism into one satisfying α-DP_T. A challenge is that the temporal privacy leakage may increase over time, so that α-DP_T is hard to achieve when the length of the release period T is unknown. In our first solution, we prove that the supremum of temporal privacy leakage exists in some cases; in these cases, we allocate appropriate privacy budgets to ensure that the accumulated temporal privacy leakage is never greater than α, no matter how long T is. However, when T is too short for the accumulation of temporal privacy leakage to result in a significant increase, this solution may over-perturb the data. The second solution is to exactly achieve α-DP_T at each time point by finely calculating the temporal privacy leakage.
Finally, experiments with synthetic data confirm the efficiency and effectiveness of our privacy leakage quantification algorithm. We also demonstrate the impact of different degrees of temporal correlations on privacy leakage.
II. Preliminaries
A. Differential Privacy
Differential privacy [12] is a formal definition of data privacy. Let D be a database and D′ be a copy of D that differs in any one tuple; D and D′ are called neighboring databases. A differentially private output from D or D′ should exhibit little difference.
Definition 1 (ε-DP)
ℳ is a randomized mechanism that takes as input D and outputs r, i.e., ℳ(D) = r. ℳ satisfies ε-differential privacy (ε-DP) if the following inequality is true for any pair of neighboring databases D, D′ and all possible outputs r.
| Pr[ℳ(D) = r] ≤ e^ε · Pr[ℳ(D′) = r] | (1) |
The parameter ε, called the privacy budget, represents the degree of privacy offered. Intuitively, a lower value of ε implies a stronger privacy guarantee and a larger perturbation noise, whereas a higher value of ε implies a weaker privacy guarantee while possibly achieving higher accuracy.
A commonly used method to achieve ε-DP is the Laplace mechanism, which adds random noise drawn from a calibrated Laplace distribution into the aggregates to be published.
Theorem 1 (Laplace Mechanism)
Let Q : D → ℝ be a statistical query on database D. The sensitivity of Q is defined as the maximum L1 norm between Q(D) and Q(D′) over all neighboring databases, i.e., Δ = max_{D,D′} ‖Q(D) − Q(D′)‖1. We can achieve ε-DP by adding Laplace noise with scale Δ/ε, i.e., Lap(Δ/ε).
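For concreteness, a minimal Python sketch of the Laplace mechanism for a count query (sensitivity Δ = 1, as in Example 1) is shown below; the function and variable names are ours, not from any particular library.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with epsilon-DP by adding Lap(sensitivity / epsilon) noise."""
    scale = sensitivity / epsilon           # b = Delta / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: perturb the count of people at loc1 at one time point (Delta = 1).
noisy_count = laplace_mechanism(true_value=23, sensitivity=1.0, epsilon=0.1)
```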
B. Privacy Leakage
Let us first discuss the adversaries tolerated by differential privacy, and then formalize privacy leakage w.r.t. such adversaries. Differential privacy is able to protect against attackers who have knowledge of all users' data in the database except that of the targeted victim [16]. Let i ∈ U be a user in the database D. Let Ai be an adversary who targets user i and has knowledge of all tuples in D except user i's, where l_i ∈ {loc1, …, locn} denotes the data of user i. The adversary Ai observes the private output r and attempts to guess whether the value of user i is loc_j or loc_k, where loc_j, loc_k ∈ {loc1, …, locn}. We define the privacy leakage of a DP mechanism as follows.
Definition 2 (Privacy Leakage of a DP mechanism against Ai)
Let U be the set of users in the database. Let Ai be an adversary who targets user i and knows all the tuples in the database except that of user i. The privacy leakage of a differentially private mechanism ℳ against one Ai and against all Ai, i ∈ U, are defined, respectively, as $PL_0(ℳ, A_i) = \sup_{r,\, l_i,\, l_i'} \log \frac{\Pr(r \mid l_i,\, D \setminus \{l_i\})}{\Pr(r \mid l_i',\, D \setminus \{l_i\})}$ and $PL_0(ℳ) = \max_{i \in U} PL_0(ℳ, A_i)$, in which l_i and l_i' are two different possible values of user i's data.
In other words, the privacy budget of a DP mechanism can be considered a metric of privacy leakage: the larger ε, the larger the privacy leakage. Hence, we can say that ℳ satisfies ε-DP if PL0(ℳ) ≤ ε. We note that an ε′-DP mechanism automatically satisfies ε-DP if ε′ < ε. For convenience, in the remainder of this paper, when we say that ℳ satisfies ε-DP, we mean that the privacy leakage is equal to ε.
C. Problem Setting
We attempt to quantify the potential privacy loss of a DP mechanism under temporal correlations in the context of continuous data release (e.g., releasing private counts at each time point as shown in Figure 1). Users in the database, denoted by U, generate data continuously. Let loc = {loc1, …, locn} be all possible values of a user's data. We denote the value of user i at time t by l_i^t. A trusted server collects the data of each user into the database Dt at each time t (e.g., the columns in Figure 1(a)). A DP mechanism ℳt releases a differentially private output rt independently at each time t. Our goal is to quantify and bound the potential privacy loss of ℳt against adversaries with knowledge of temporal correlations. We summarize the notations used in this paper in Table I. We note that, while we use location data in Example 1, the problem setting is general for temporally correlated data.
TABLE I.
Summary of Notations.
| U | The set of users in the database |
| i | The i-th user, where i ∈ [1, |U|] |
| loc | Value domain {loc1, …, locn} of all users' data |
| l_i^t | The data of user i at time t, l_i^t ∈ loc |
| Dt | The database at time t |
| ℳt | Differentially private mechanism over Dt |
| rt | Differentially private output at time t |
| Ai | The adversary who targets user i, considered in traditional DP |
| A_i^T | Adversary Ai with additional knowledge of temporal correlations |
| P_i^B | Transition matrix representing Pr(l_i^{t−1} | l_i^t), i.e., backward temporal correlation, known to A_i^T |
| P_i^F | Transition matrix representing Pr(l_i^{t+1} | l_i^t), i.e., forward temporal correlation, known to A_i^T |
| D_K^t | The subset of database Dt known to A_i^T, i.e., all tuples except user i's |
Our problem setting is identical to differential privacy under continual observation in the literature [1] [3] [8] [13] [15] [17] [22] [36]. In contrast to "one-shot" data release over a static database, the adversaries can observe multiple differentially private outputs, i.e., r1, …, rt. There are typically two different privacy goals in the context of continuous data release: event-level and user-level [13] [15]. The former protects each user's single data point at time t (i.e., the neighboring databases are Dt and Dt′), whereas the latter protects the presence of a user with all her data on the timeline (i.e., the neighboring databases are {D1, …, Dt} and {D1′, …, Dt′}). In this work, we mainly study the privacy leakage at a single time point (event-level) under temporal correlations, and we also extend the discussion to user-level privacy by studying the composability of the privacy leakage.
III. Analyzing Privacy Leakage
In the following, we first formalize the adversary with knowledge of temporal correlations in Section III-A. We then define and analyze temporal privacy leakage in Section III-B. We provide a new privacy notion, α-DP_T, to protect against temporal privacy leakage and prove its composability in Section III-C. Finally, we make a few important observations in Section III-D.
A. Adversary with Knowledge of Temporal Correlations
Markov Chain for Temporal Correlations
The Markov chain (MC) is extensively used in modeling user mobility profiles [19] [30] [32]. For a time-homogeneous first-order MC, a user's current value depends only on the previous one. The parameter of the MC is the transition matrix, which describes the probabilities of transitions between values; the probabilities in each row of the transition matrix sum to 1. A concrete example of a transition matrix and the time-reversed one for location data is shown in Figure 2. As shown in Figure 2(a), if user i is at loc1 now (time t), then the probability of having come from loc3 (time t − 1) is 0.7, namely, Pr(l_i^{t−1} = loc3 | l_i^t = loc1) = 0.7. As shown in Figure 2(b), if user i was at loc3 at the previous time t − 1, then the probability of being at loc1 now (time t) is 0.6, namely, Pr(l_i^t = loc1 | l_i^{t−1} = loc3) = 0.6. We call the transition matrices in Figure 2(a) and (b) the backward temporal correlation and the forward temporal correlation, respectively.
Fig. 2. Examples of Temporal Correlations.
Definition 3 (Temporal Correlations)
The backward and forward temporal correlations between user i's data at adjacent time points are described by transition matrices P_i^B and P_i^F representing Pr(l_i^{t−1} | l_i^t) and Pr(l_i^{t+1} | l_i^t), respectively.
It is reasonable to consider that the backward and/or forward temporal correlations could be acquired by adversaries. For example, the adversaries can learn them from users' historical trajectories (or the reversed trajectories) by well-studied methods such as maximum likelihood estimation (supervised) or the Baum-Welch algorithm (unsupervised). Also, if the initial distribution Pr(l_i^1) is known, the backward temporal correlation Pr(l_i^{t−1} | l_i^t) can be derived from the forward temporal correlation Pr(l_i^t | l_i^{t−1}) by Bayesian inference: Pr(l_i^{t−1} | l_i^t) = Pr(l_i^t | l_i^{t−1}) · Pr(l_i^{t−1}) / Pr(l_i^t).
Since estimating temporal correlations from data is beyond the scope of this work, we assume the adversaries’ prior knowledge about temporal correlations is given in our framework.
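As an illustration of this Bayesian inference, the sketch below derives a backward transition matrix from a forward one and an initial distribution by propagating the marginal of l_i^{t−1} and applying Bayes' rule. It is a minimal sketch under our own naming; the numeric values of the example matrix are illustrative only.

```python
import numpy as np

def backward_from_forward(P_F: np.ndarray, init_dist: np.ndarray, t: int) -> np.ndarray:
    """Return B with B[j, k] = Pr(l^{t-1}=loc_k | l^t=loc_j), given the forward matrix
    P_F[k, j] = Pr(l^t=loc_j | l^{t-1}=loc_k) and the initial distribution Pr(l^1)."""
    # Marginal of l^{t-1}: propagate the initial distribution t-2 steps forward.
    prev_marginal = init_dist @ np.linalg.matrix_power(P_F, t - 2)
    # Joint Pr(l^{t-1}=loc_k, l^t=loc_j) = Pr(l^{t-1}=loc_k) * Pr(l^t=loc_j | l^{t-1}=loc_k).
    joint = prev_marginal[:, None] * P_F
    # Bayes' rule: divide each column by the marginal Pr(l^t=loc_j), then transpose.
    return (joint / joint.sum(axis=0)).T

# Illustrative 3-location forward matrix (rows indexed by l^{t-1}) and uniform Pr(l^1).
P_F = np.array([[0.3, 0.4, 0.3], [0.2, 0.5, 0.3], [0.6, 0.1, 0.3]])
P_B = backward_from_forward(P_F, np.full(3, 1 / 3), t=3)
```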
We now define an “updated version” of Ai (in Definition 2) with knowledge of temporal correlations.
Definition 4
A^T is a class of adversaries who have knowledge of (1) all other users' data at each time t except that of the targeted victim, i.e., D_K^t = Dt \ {l_i^t}, and (2) backward and/or forward temporal correlations represented as transition matrices P_i^B and P_i^F. We denote the adversary in A^T who targets user i by A_i^T.
There are three types of A_i^T: (i) A_i^T(P_i^B, Ø), (ii) A_i^T(Ø, P_i^F), and (iii) A_i^T(P_i^B, P_i^F), where Ø denotes that the corresponding correlations are not known to the adversaries². For simplicity, we denote types (i) and (ii) by A_i^B and A_i^F, respectively. We note that A_i^T(Ø, Ø) is the same as the traditional DP adversary Ai without any knowledge of temporal correlations.
We now show what information A_i^T can derive from the temporal correlations.
Lemma 1
The adversary A_i^T who has knowledge of the backward temporal correlation Pr(l_i^{t−1} | l_i^t) can derive Pr(D^{t−1} | D^t).
Lemma 2
The adversary A_i^T who has knowledge of the forward temporal correlation Pr(l_i^{t+1} | l_i^t) can derive Pr(D^{t+1} | D^t).
We omit the proofs of the lemmas due to space limitations.
B. Temporal Privacy Leakage
We now define the privacy leakage w.r.t. A_i^T. For the convenience of analysis, let us assume the length of the release period³ is T. The adversary observes the differentially private outputs r1, …, rt, …, rT and attempts to infer the value of user i's data at time t, namely l_i^t. Similar to Definition 2, we define the privacy leakage in terms of event-level differential privacy in the context of continual data release as described in Section II-C.
Definition 5 (Temporal Privacy Leakage, TPL)
Let Dt′ be a neighboring database of Dt. Let D_K^t be the tuple knowledge of A_i^T, i.e., D_K^t = Dt \ {l_i^t}. We have Dt = D_K^t ∪ {l_i^t} and Dt′ = D_K^t ∪ {l_i^{t′}}, where l_i^t and l_i^{t′} are two different values of user i's data at time t. Temporal Privacy Leakage (TPL) of ℳt w.r.t. a single A_i^T and w.r.t. all A_i^T, i ∈ U, are defined, respectively, as follows.
| $TPL(ℳ^t, A_i^T) = \sup_{l_i^t,\, l_i^{t′},\, r^1,…,r^T} \log \frac{\Pr(r^1,…,r^T \mid l_i^t, D_K^t)}{\Pr(r^1,…,r^T \mid l_i^{t′}, D_K^t)}$ | (2) |
| $= \sup_{l_i^t,\, l_i^{t′},\, r^1,…,r^T} \log \frac{\Pr(r^1,…,r^T \mid D^t)}{\Pr(r^1,…,r^T \mid D^{t′})}$ | (3) |
| $TPL(ℳ^t) = \max_{i \in U} TPL(ℳ^t, A_i^T)$ | (4) |
We first analyze TPL(ℳt, A_i^T) (i.e., Equation (2)) because it is key to solving Equations (3) and (4). We can rewrite it as follows because r1, …, rT are published independently by the differentially private mechanisms ℳ1, …, ℳT.
| $TPL(ℳ^t, A_i^T) = \sup_{l_i^t, l_i^{t′}, r^1,…,r^T} \log \Big( \underbrace{\tfrac{\Pr(r^1,…,r^t \mid l_i^t, D_K^t)}{\Pr(r^1,…,r^t \mid l_i^{t′}, D_K^t)}}_{\text{backward privacy leakage}} \cdot \underbrace{\tfrac{\Pr(r^t,…,r^T \mid l_i^t, D_K^t)}{\Pr(r^t,…,r^T \mid l_i^{t′}, D_K^t)}}_{\text{forward privacy leakage}} \cdot \tfrac{\Pr(r^t \mid l_i^{t′}, D_K^t)}{\Pr(r^t \mid l_i^t, D_K^t)} \Big)$ | (5) |
It is clear that $\sup_{l_i^t, l_i^{t′}, r^t} \log \frac{\Pr(r^t \mid l_i^t, D_K^t)}{\Pr(r^t \mid l_i^{t′}, D_K^t)} = PL_0(ℳ^t, A_i)$ because PL0 indicates the privacy leakage w.r.t. one output rt (refer to Definition 2). As annotated in the above equation, we define backward and forward privacy leakage as follows.
Definition 6 (Backward Privacy Leakage, BPL)
The privacy leakage of ℳt caused by r1, …, rt w.r.t. A_i^T is called backward privacy leakage, defined as follows.
| $BPL(ℳ^t, A_i^T) = \sup_{l_i^t,\, l_i^{t′},\, r^1,…,r^t} \log \frac{\Pr(r^1,…,r^t \mid l_i^t, D_K^t)}{\Pr(r^1,…,r^t \mid l_i^{t′}, D_K^t)}$ | (6) |
| $BPL(ℳ^t) = \max_{i \in U} BPL(ℳ^t, A_i^T)$ | (7) |
Definition 7 (Forward Privacy Leakage, FPL)
The privacy leakage of ℳt caused by rt, …, rT w.r.t. A_i^T is called forward privacy leakage, defined as follows.
| $FPL(ℳ^t, A_i^T) = \sup_{l_i^t,\, l_i^{t′},\, r^t,…,r^T} \log \frac{\Pr(r^t,…,r^T \mid l_i^t, D_K^t)}{\Pr(r^t,…,r^T \mid l_i^{t′}, D_K^t)}$ | (8) |
| $FPL(ℳ^t) = \max_{i \in U} FPL(ℳ^t, A_i^T)$ | (9) |
By substituting Equations (6) and (8) into (5), we have
| $TPL(ℳ^t, A_i^T) = BPL(ℳ^t, A_i^T) + FPL(ℳ^t, A_i^T) − PL_0(ℳ^t, A_i)$ | (10) |
Similarly, by expanding Equation (4) into one resembling Equation (5) and combining it with Equations (7) and (9), we have
| $TPL(ℳ^t) = BPL(ℳ^t) + FPL(ℳ^t) − PL_0(ℳ^t)$ | (11) |
Intuitively, BPL and FPL are the privacy leakage w.r.t. the adversaries A_i^B and A_i^F, respectively, and TPL is the privacy leakage w.r.t. A_i^T. In Equation (11), we need to subtract PL0(ℳt) because it is counted in both BPL and FPL. We show more details in the following analysis.
BPL over time
For BPL, we first expand and simplify Equation (6) by Bayes' theorem and Lemma 1; BPL(ℳt, A_i^T) is equal to
| $\sup_{l_i^t, l_i^{t′}, r^1,…,r^t} \log \frac{\sum_{l_i^{t−1}} \overbrace{\Pr(r^1,…,r^{t−1} \mid l_i^{t−1}, D_K^{t−1})}^{\text{(i) BPL at time } t−1} \overbrace{\Pr(l_i^{t−1} \mid l_i^t)}^{\text{(ii) backward correlation}}}{\sum_{l_i^{t−1}} \Pr(r^1,…,r^{t−1} \mid l_i^{t−1}, D_K^{t−1}) \Pr(l_i^{t−1} \mid l_i^{t′})} + \underbrace{\sup_{l_i^t, l_i^{t′}, r^t} \log \frac{\Pr(r^t \mid l_i^t, D_K^t)}{\Pr(r^t \mid l_i^{t′}, D_K^t)}}_{\text{(iii) } PL_0(ℳ^t, A_i)}$ | (12) |
We now discuss the three annotated terms in the above equation. The first term indicates BPL at the previous time t − 1, the second term is the backward temporal correlation determined by P_i^B, and the third term is equal to the privacy leakage w.r.t. adversaries in traditional DP (see Definition 2). Hence, BPL at time t depends on (i) BPL at time t − 1, (ii) the backward temporal correlations, and (iii) the (traditional) privacy leakage of ℳt (which is related to the privacy budget allocated to ℳt). By Equation (12), we know that if t = 1, BPL(ℳ1, A_i^T) = PL0(ℳ1, Ai); if t > 1, we have the following, where ℒB(·) is a backward temporal privacy loss function for calculating the accumulated privacy loss.
| $BPL(ℳ^t, A_i^T) = \mathcal{L}^B\big(BPL(ℳ^{t−1}, A_i^T)\big) + PL_0(ℳ^t, A_i)$ | (13) |
Equation (13) reveals that the BPL is calculated recursively and may accumulate over time, as shown in Example 2 (Fig.3(a)).
Fig. 3. Example of Temporal Privacy Leakage of Lap(1/0.1) at each time point.
Example 2 (BPL due to previous releases)
Suppose that a DP mechanism ℳt satisfies PL0(ℳt) = 0.1 for each time t ∈ [1, T], i.e., 0.1-DP at each time point. We now discuss BPL at each time point w.r.t. A_i^B with knowledge of the backward temporal correlation P_i^B. In an extreme case, if P_i^B indicates the strongest correlation, say, Pr(l_i^{t−1} = l_i^t) = 1 for any t ∈ [1, T], then, at time t, A_i^B knows that Dt = Dt−1 = ⋯ = D1. Hence, the continuous data release r1, …, rt is equivalent to releasing the same database multiple times; the privacy leakage at each time point accumulates from previous time points and increases linearly (Figure 3(a)(i)). In another extreme case, if no backward temporal correlation is known to the adversary (e.g., the Ai in Definition 2 or the adversary A_i^F in Definition 4), the backward privacy leakage at each time point is PL0(ℳt), as shown in Figure 3(a)(iii). Figure 3(a)(ii) depicts the backward privacy leakage caused by a moderate backward temporal correlation, which can be finely quantified using our method (Algorithm 1) in Section IV.
FPL over time
For FPL, similar to the analysis of BPL, we expand and simplify Equation (8) by Bayes' theorem and Lemma 2; FPL(ℳt, A_i^T) is equal to
| $\sup_{l_i^t, l_i^{t′}, r^t,…,r^T} \log \frac{\sum_{l_i^{t+1}} \Pr(r^{t+1},…,r^T \mid l_i^{t+1}, D_K^{t+1}) \Pr(l_i^{t+1} \mid l_i^t)}{\sum_{l_i^{t+1}} \Pr(r^{t+1},…,r^T \mid l_i^{t+1}, D_K^{t+1}) \Pr(l_i^{t+1} \mid l_i^{t′})} + \sup_{l_i^t, l_i^{t′}, r^t} \log \frac{\Pr(r^t \mid l_i^t, D_K^t)}{\Pr(r^t \mid l_i^{t′}, D_K^t)}$ | (14) |
By Equation (14), we know that if t = T, FPL(ℳT, A_i^T) = PL0(ℳT, Ai); if t < T, we have the following, where ℒF(·) is a forward temporal privacy loss function for calculating the increased privacy loss due to FPL at the next time.
| $FPL(ℳ^t, A_i^T) = \mathcal{L}^F\big(FPL(ℳ^{t+1}, A_i^T)\big) + PL_0(ℳ^t, A_i)$ | (15) |
Equation (15) reveals that FPL is calculated recursively and may increase over time, as shown in Example 3 (Fig.3(b)).
Example 3 (FPL due to future releases)
Considering the same setting as in Example 2, we now discuss FPL at each time point w.r.t. A_i^F with knowledge of the forward temporal correlation P_i^F. In an extreme case, if P_i^F indicates the strongest correlation, say, Pr(l_i^{t+1} = l_i^t) = 1 for any t ∈ [1, T], then, at time t, A_i^F knows that Dt = Dt+1 = ⋯ = DT. Hence, the continuous data release rt, …, rT is equivalent to releasing the same database multiple times; the privacy leakage at time t increases every time a new release (i.e., rt+1, rt+2, …) happens. For example, in Figure 3(b)(i) we see that, contrary to BPL, the FPL at time 1 is the highest (due to the future releases at times 1 to 10), while the FPL at time 10 is the lowest (since there is no future release with respect to time 10 yet). When r11 is released, the FPL at every time t ∈ [1, 10] will be updated. In another extreme case, if no forward temporal correlation is known to the adversary (e.g., the Ai in Definition 2 or the adversary A_i^B in Definition 4), then the forward privacy leakage at each time point is PL0(ℳt), as shown in Figure 3(b)(iii). Figure 3(b)(ii) depicts the forward privacy leakage caused by a moderate forward temporal correlation, which can be finely quantified using our method (Algorithm 1) in Section IV.
Remark 1
The extreme cases shown in Examples 2 and 3 are the upper and lower bounds of BPL and FPL. Hence, the backward temporal privacy loss function ℒB(·) in Equation (13) and the forward temporal privacy loss function ℒF(·) in Equation (15) satisfy 0 · x ≤ ℒB(x) ≤ 1 · x, where x is the BPL at the previous time, and 0 · x ≤ ℒF(x) ≤ 1 · x, where x is the FPL at the next time, respectively.
From Examples 2 and 3, we know that backward temporal correlation (i.e., P_i^B) does not affect FPL, and forward temporal correlation (i.e., P_i^F) does not affect BPL. In other words, the adversary A_i^B only causes BPL, A_i^F only causes FPL, and A_i^T with both P_i^B and P_i^F poses a risk of both BPL and FPL.
Figure 3(c) shows TPL, which is calculated from BPL and FPL (see Equation (11)). Given P_i^B and P_i^F, finely quantifying TPL is a challenge. We design a novel algorithm to calculate them efficiently in Section IV.
C. DP under Temporal Correlations and Its Composability
In this section, we define α-DP_T to provide a privacy guarantee against temporal privacy leakage. We prove its sequential composition theorem and discuss the connection between α-DP_T and ε-DP in terms of event-level/user-level privacy [13] [15] and w-event privacy [22].
Definition 8
For every user i in the database, if the TPL of ℳt (see Definition 5) is less than or equal to α, we say that ℳt satisfies α-differential privacy under temporal correlation, denoted by α-DP_T.
α-DP_T is an enhanced version of DP on temporal data. If the data are temporally independent (i.e., for every user i, both P_i^B and P_i^F are Ø), an ε-DP mechanism satisfies ε-DP_T. If the data are temporally correlated (i.e., there exists a user i whose P_i^B or P_i^F is not Ø), an ε-DP mechanism satisfies α-DP_T with α ≥ ε, where α is the increased privacy leakage and can be quantified using our framework.
One may wonder, for a sequence of mechanisms on the timeline, what the overall privacy guarantee is. Suppose that ℳt satisfies εt-DP and poses risks of BPL and FPL equal to α_t^B and α_t^F, respectively. That is, ℳt satisfies (α_t^B + α_t^F − εt)-DP_T at time t according to Equation (11). We formally define such overall privacy leakage based on Equation (4).
Definition 9 (TPL of a sequence of DP mechanisms)
The temporal privacy leakage of a sequence of DP mechanisms {ℳt, …, ℳt+j}, where j ≥ 0, is defined analogously to Equation (4), with the neighboring databases differing in all of user i's data from time t to time t + j.
It is easy to see that, if j = 0, this is event-level privacy; if t = 1 and j = T − 1, it is user-level privacy.
Theorem 2 (Composition under Temporal Correlations)
A sequence of DP mechanisms {ℳt, …, ℳt+j} satisfies
| $\Big(\alpha_t^B + \alpha_{t+j}^F + \sum_{k=t}^{t+j} \varepsilon_k − \varepsilon_t − \varepsilon_{t+j}\Big)\text{-DP}_T$ | (16) |
We omit the proof of Theorem 2 due to space limitations.
When t = 1 and j = T − 1 in Theorem 2, we have the following corollary because α_1^B = ε1 and α_T^F = εT.
Corollary 1
The temporal privacy leakage of a combined mechanism {ℳ1, …, ℳT} is $\sum_{t=1}^{T} \varepsilon_t$.
It shows that temporal correlations do not affect the user-level privacy (i.e., protecting all the data on the timeline of each user), which is in line with the idea of group differential privacy: protecting all the correlated data in a bundle.
We now compare the privacy guarantees of DP and α-DP_T. As mentioned in Section II-C, there are typically two privacy notions in continuous data release: event-level and user-level [13] [15]. Recently, w-event privacy [22] was proposed to bridge the gap between event-level and user-level privacy. It protects the data in any w-length sliding window by utilizing the following sequential composition theorem of DP.
Theorem 3 (Sequential composition on independent data [31])
Suppose that ℳt satisfies εt-DP for each t ∈ [1, T]. A combined mechanism {ℳt, …, ℳt+j} satisfies $(\sum_{k=t}^{t+j} \varepsilon_k)$-DP.
Suppose that ℳt satisfies ε-DP for each t ∈ [1, T]. According to Theorem 3, it achieves Tε-DP at user level and wε-DP at w-event level. We compare the privacy guarantees on independent data and on temporally correlated data in Table II.
This reveals that temporal correlations may blur the boundary between event-level privacy and user-level privacy. In an extreme case, the temporal privacy leakage of an ε-DP mechanism at event level can be Tε, i.e., Tε-DP_T. Consider the examples shown in Figure 3. Under the strongest temporal correlations, ℳ10 satisfies 1-DP_T at event level, and the combined mechanism {ℳ1, …, ℳ10} also satisfies 1-DP_T at user level. Essentially, this is because the adversaries may infer {D1, …, DT} (user level) from Dt (event level) using temporal correlations.
D. Discussion
We make a few important observations regarding our privacy analysis.
First, the temporal privacy leakage is defined in a personalized way. That is, the privacy leakage may be different for users with distinct temporal patterns (i.e., P_i^B and P_i^F). We define the overall temporal privacy leakage as the maximum over all users, so that it is compatible with a traditional ε-DP mechanism (which uses one parameter to represent the overall privacy level), and we can convert a traditional DP mechanism into one that bounds the temporal privacy leakage. On the other hand, our definitions are also compatible with personalized differential privacy (PDP) mechanisms [21], in which personalized privacy budgets, i.e., a vector [ε1, …, εn], are allocated to the users. In other words, we can convert a PDP mechanism to bound the temporal privacy leakage for each user.
Second, in this paper, we focus on temporally correlated data and assume that the adversary has knowledge of temporal correlations modeled by a Markov chain. However, it is possible that the adversary has knowledge of a more sophisticated temporal correlation model or of other types of correlations, such as user-user correlations modeled by a Gaussian Markov Random Field as in [37]. Our contributions in this work can serve as primitives for quantifying the privacy risk under more advanced adversarial knowledge.
IV. Calculating Temporal Privacy Leakage
In this section, we design algorithms for computing backward privacy leakage (BPL) and forward privacy leakage (FPL). We first show that both of them can be transformed into finding the optimal solution of a linear-fractional programming problem [2]. Traditionally, this type of problem can be solved by the simplex algorithm [10] in exponential time. By exploiting the constraints in this problem, we then design a method to solve it in polynomial time.
A. Problem formulation
According to the privacy analysis of BPL and FPL in Section III-B, we need to solve the backward and forward temporal privacy loss functions ℒB(·) and ℒF(·) in Equations (13) and (15), respectively. By observing the structure of the first term in Equations (12) and (14), we can see that the calculations of the recursive functions ℒB(·) and ℒF(·) proceed in virtually the same way: they calculate the increment over the input value (the previous BPL or the next FPL) based on the temporal correlations (backward or forward). Although different degrees of correlation result in different privacy loss functions, the method for analyzing them is the same.
We now quantitatively analyze the temporal privacy leakage. In the following, we demonstrate the calculation of ℒB(·). The first term of Equation (12) is as follows.
| $\sup_{l_i^t, l_i^{t′}, r^1,…,r^{t−1}} \log \frac{\sum_{j=1}^{n} \Pr(r^1,…,r^{t−1} \mid l_i^{t−1} = loc_j, D_K^{t−1}) \Pr(l_i^{t−1} = loc_j \mid l_i^t)}{\sum_{j=1}^{n} \Pr(r^1,…,r^{t−1} \mid l_i^{t−1} = loc_j, D_K^{t−1}) \Pr(l_i^{t−1} = loc_j \mid l_i^{t′})}$ | (17) |
We now simplify the notation in the above formula. Let two arbitrary (different) rows of the transition matrix be the vectors q = (q1, …, qn) and d = (d1, …, dn). For example, suppose that q is the first row in the transition matrix of Figure 2(b); then the elements of q are the transition probabilities in that row. Let x = (x1, …, xn)^T be a vector whose elements indicate Pr(r1, …, rt−1 | l_i^{t−1}, D_K^{t−1}) for the distinct values of l_i^{t−1}; e.g., x1 denotes Pr(r1, …, rt−1 | l_i^{t−1} = loc1, D_K^{t−1}). Expanding the summations in (17) with this notation, the ratio inside the logarithm becomes (q · x)/(d · x).
Next, we formalize the problem and constraints. Suppose that the input of ℒB(·), i.e., the BPL at the previous time point, is α. According to the definition of BPL (as a supremum), for any xj, xk ∈ x, we have xj ≤ e^α · xk. Given x as the variable vector and q, d as the coefficient vectors, ℒB(α) is equal to the logarithm of the optimal value of the objective function (18) in the following problem (18)∼(20).
| maximize $\frac{q \cdot x}{d \cdot x} = \frac{q_1 x_1 + \dots + q_n x_n}{d_1 x_1 + \dots + d_n x_n}$ | (18) |
| subject to $x_j \le e^{\alpha} x_k$ for all $x_j, x_k \in x$ | (19) |
| $0 < x_j \le 1$ for all $x_j \in x$ | (20) |
The above is a form of linear-fractional programming [2], in which the objective function is a ratio of two linear functions and the constraints are linear inequalities or equations. A linear-fractional programming problem can be converted into a sequence of linear programming problems [2] and then solved using the simplex algorithm [10] in time O(2^n). When n is large, the computation is time-consuming.
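For reference, the baseline reduction to linear programming can be sketched as follows. It uses the standard Charnes-Cooper transformation and an off-the-shelf LP solver (here scipy, purely as an example), and it assumes the constraints (19)-(20) as reconstructed above; it is not the simplex routine used in the paper's experiments.

```python
import numpy as np
from scipy.optimize import linprog

def max_ratio_lp(q: np.ndarray, d: np.ndarray, alpha: float) -> float:
    """Maximize (q.x)/(d.x) subject to x_j <= e^alpha * x_k and 0 < x_j <= 1,
    via the Charnes-Cooper transformation y = s*x with s = 1/(d.x)."""
    n = len(q)
    e_a = np.exp(alpha)
    # Variables z = [y_1..y_n, s]; minimize -q.y  <=>  maximize q.y.
    c = np.concatenate([-q, [0.0]])
    rows = []
    # Ratio constraints: y_j - e^alpha * y_k <= 0 for all ordered pairs j != k.
    for j in range(n):
        for k in range(n):
            if j != k:
                row = np.zeros(n + 1)
                row[j], row[k] = 1.0, -e_a
                rows.append(row)
    # Upper-bound constraints x_j <= 1  <=>  y_j - s <= 0.
    for j in range(n):
        row = np.zeros(n + 1)
        row[j], row[-1] = 1.0, -1.0
        rows.append(row)
    A_ub, b_ub = np.array(rows), np.zeros(len(rows))
    # Normalization d.y = 1.
    A_eq = np.concatenate([d, [0.0]])[None, :]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=np.array([1.0]),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return -res.fun   # optimal value of (q.x)/(d.x)

# Example: two rows of a transition matrix and previous BPL alpha = 0.1.
print(np.log(max_ratio_lp(np.array([0.7, 0.2, 0.1]), np.array([0.2, 0.3, 0.5]), 0.1)))
```

Even for moderate n, building the O(n²) pairwise ratio constraints dominates the model size, which is one reason the closed-form approach below is preferable.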
Bounding the objective function by constraints
We now investigate a more efficient method to solve this problem by exploiting the structure of the constraints. From Inequalities (19) and (20), we know that the feasible region of the constraints is not empty and bounded; hence, an optimal solution exists. By exploiting the constraints, we prove the following theorem, which enables the optimal solution to be found in time O(n^2).
We define some notations that will be frequently used in the remainder of this paper. Suppose that the variable vector x consists of two parts (subsets): x+ and x−. Let the corresponding coefficient vectors be q+, d+ and q−, d−. Let q = Σq+ and d = Σd+. For example, suppose that x+ = [x1, x3] and x− = [x2, x4, x5]. Then, we have q+ = [q1, q3], d+ = [d1, d3], q− = [q2, q4, q5], and d− = [d2, d4, d5]. In this case, q = q1 + q3 and d = d1 + d3.
Theorem 4
If the following Inequalities (21) and (22) are satisfied, the maximum value of the objective function in the problem (18)∼(20) is $\frac{q\, e^{\alpha} + 1 − q}{d\, e^{\alpha} + 1 − d}$.
| $\frac{q_j}{d_j} > \frac{q\, e^{\alpha} + 1 − q}{d\, e^{\alpha} + 1 − d}$ for all $q_j \in q^+,\, d_j \in d^+$ | (21) |
| $\frac{q_j}{d_j} \le \frac{q\, e^{\alpha} + 1 − q}{d\, e^{\alpha} + 1 − d}$ for all $q_j \in q^-,\, d_j \in d^-$ | (22) |
Proof
See Appendix A. ■
As mentioned previously, the calculations of ℒB(·) and ℒF(·) are identical. Therefore, given a transition matrix P_i^B (or P_i^F) and the previous BPL (or the next FPL) α, the increment of the backward (or forward) privacy loss is the maximum value in the above theorem over any two rows q and d of P_i^B (or P_i^F). We denote these increments by ℒB(α) and ℒF(α), respectively.
| $\mathcal{L}^B(\alpha) = \max_{q,\, d \text{ two rows of } P_i^B} \log \frac{q\, e^{\alpha} + 1 − q}{d\, e^{\alpha} + 1 − d}$, with $q = \Sigma q^+$ and $d = \Sigma d^+$ for that pair of rows | (23) |
| $\mathcal{L}^F(\alpha) = \max_{q,\, d \text{ two rows of } P_i^F} \log \frac{q\, e^{\alpha} + 1 − q}{d\, e^{\alpha} + 1 − d}$, with $q = \Sigma q^+$ and $d = \Sigma d^+$ for that pair of rows | (24) |
It is easy to see that we can always find such q+ and d+ satisfying Inequalities (21) and (22). Further, we give the following corollary for finding q+ and d+.
Corollary 2
If Inequalities (21) and (22) are satisfied, we have qj > dj in which qj ∈ q+ and dj ∈ d+.
We now briefly examine ℒB(·) and ℒF(·) in Equations (23) and (24). First, we have ℒB(α) ≥ 0 and ℒF(α) ≥ 0 because q > d. Second, when q and d have the largest difference (e.g., q = (1, 0) and d = (0, 1), and hence q = 1 and d = 0), it follows that ℒB(α) = α and ℒF(α) = α. Therefore, this is in accordance with Remark 1. The advantage of Equations (23) and (24) is the ability to finely quantify BPL and FPL w.r.t. arbitrary P_i^B and P_i^F.
B. Privacy Leakage Quantification Algorithm
The next question is how to find the q and d (or q+ and d+) that give the maximum objective function. Inequalities (21) and (22) in Theorem 4 are sufficient conditions for obtaining this optimal value, and Corollary 2 gives a necessary condition for satisfying Inequalities (21) and (22). Based on the above analysis, we design Algorithm 1 for computing BPL or FPL.
Algorithm 1. Finding BPL or FPL
Computing BPL or FPL by solving the linear-fractional programming
According to the definition of BPL and FPL, we need to return the maximum privacy leakage (Line 12) w.r.t. any two rows in the given transition matrix (Line 2). Lines 3∼11 solve one linear-fractional programming problem (18)∼(20) w.r.t. two specific rows of the transition matrix. In Lines 3 and 4, we divide the variable vector x into two parts according to Corollary 2, which gives the necessary condition for finding the maximum solution: if the coefficients satisfy qj ≤ dj, they are not in the q+ and d+ that satisfy Inequalities (21) and (22). In other words, only pairs with qj > dj are "candidates" for the q+ and d+ that give the maximum objective function. In Lines 5∼11, we further check whether q+ and d+ satisfy Inequalities (21) and (22). According to Line 7, it is clear that any subset of q+ and d+ automatically satisfies Inequality (22). In Lines 8∼10, we remove the pairs qj ∈ q+ and dj ∈ d+ that do not satisfy Inequality (21). Note that the values of q and d (recall that q = Σq+ and d = Σd+) are recalculated after such an "update" (deletion in Line 10). If q and d are updated, we recheck each pair of qj and dj in the current q+ and d+ until every pair satisfies Inequality (21).
A subtle question may arise regarding such "updates". In Lines 8∼10, if several pairs of qj and dj do not satisfy Inequality (21), say {q1, d1} and {q2, d2}, one may wonder whether it is possible that, after removing {q1, d1} from q+ and d+, Inequality (21) becomes satisfied for {q2, d2} due to the update of q and d. We show that this is not possible: if both pairs violate Inequality (21) under the original q and d, then {q2, d2} still violates it after {q1, d1} is removed. Therefore, we can remove multiple pairs of qj and dj that do not satisfy Inequality (21) at one time (Lines 8∼10).
It is easy to see that, if qi = di for each i ∈ [1, n], the update terminates with empty q+ and d+. In this case, we have q = d; hence ℒB(·) and ℒF(·) are 0.
Algorithm complexity
The time complexity of solving one linear-fractional programming problem (Lines 3∼11) w.r.t. two specific rows of the transition matrix is O(n^2) because Line 9 may iterate n(n − 1) times in the worst case. The overall time complexity of Algorithm 1 is O(n^4).
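Putting the pieces together, the following Python sketch mirrors our reading of Algorithm 1; it assumes the closed-form maximum of Theorem 4 and the selection rule of Corollary 2 as reconstructed above, so the function names and the exact filtering loop are illustrative rather than the authors' code.

```python
import numpy as np
from itertools import permutations

def loss_increment(q_row: np.ndarray, d_row: np.ndarray, alpha: float) -> float:
    """Increment of privacy loss for one ordered pair of rows (q, d),
    using the closed form of Theorem 4 with iterative filtering of (q+, d+)."""
    e_a = np.exp(alpha)
    # Corollary 2: only pairs with q_j > d_j are candidates for (q+, d+).
    idx = [j for j in range(len(q_row)) if q_row[j] > d_row[j]]
    while idx:
        q_sum, d_sum = q_row[idx].sum(), d_row[idx].sum()
        ratio = (q_sum * e_a + 1 - q_sum) / (d_sum * e_a + 1 - d_sum)
        # Drop every pair violating (the reconstructed) Inequality (21) at once.
        kept = [j for j in idx if d_row[j] == 0 or q_row[j] / d_row[j] > ratio]
        if len(kept) == len(idx):
            return np.log(ratio)
        idx = kept
    return 0.0   # q = d for every candidate: no increment

def backward_or_forward_pl(P: np.ndarray, alpha_prev: float, eps: float) -> float:
    """BPL (or FPL) at the current time, given the transition matrix P,
    the previous BPL (or next FPL) alpha_prev, and the budget eps of M^t."""
    inc = max(loss_increment(P[j], P[k], alpha_prev)
              for j, k in permutations(range(len(P)), 2))
    return inc + eps   # Equation (13) / (15)

# Example: BPL accumulation over 10 releases of a 0.1-DP mechanism.
P = np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]])
bpl = 0.1
for t in range(2, 11):
    bpl = backward_or_forward_pl(P, bpl, 0.1)
print(bpl)
```

The final loop illustrates how BPL accumulates over ten releases of a 0.1-DP mechanism under a moderately strong transition matrix.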
V. Bounding Temporal Privacy Leakage
In this section, we design private data release algorithms that can be used to convert a traditional DP mechanism into one satisfying α-DP_T by allocating calibrated privacy budgets.
We first investigate the upper bound of BPL and FPL. We have demonstrated in Figure 3 that BPL and FPL may accumulate and increase over time. A natural question is: is there a limit to BPL and FPL over time? For ℳt that satisfies ε-DP at each t ∈ [1, T], a loose upper bound of BPL or FPL over time T is Tε according to Remark 1. When T is unknown, giving the upper bound of BPL or FPL is a challenge.
Theorem 5
Given a transition matrix P_i^B (or P_i^F) representing the temporal correlation, let q and d be the values that give the maximum in Equation (23) (or Equation (24)), with q ≠ d. For ℳt that satisfies ε-DP at each t ∈ [1, T], there are four cases regarding the supremum of BPL (or FPL) over time, depending on whether q = 1 and whether d = 0.
We omit the proof due to space limitations.
The above theorem is applicable to both BPL and FPL because their calculations are identical. According to the previous analysis, BPL and FPL grow in the same manner but in reversed directions on the timeline (see Figure 3(a)(b)).
Example 4 (The supremum of the increased BPL over time)
Suppose that ℳt satisfies ε-DP at each time point. In Figure 4, we show the maximum BPL w.r.t. different ε and different transition matrices P_i^B. In (a) and (b), the supremum does not exist. In (c) and (d), we can directly calculate the supremum using Theorem 5. The results are in line with the ones obtained by computing BPL step by step at each time point using Algorithm 1.
Fig. 4. Examples of the maximum BPL over time.
Achieving α-DP_T by limiting the upper bound
We now design a data release algorithm that utilizes Theorem 5 to bound TPL. Theorem 5 tells us that, if the correlation is not the strongest one (i.e., not d = 0 and q = 1), we may bound BPL or FPL within a desired value by allocating an appropriate privacy budget to a traditional DP mechanism at each time point. One problem is that, in Theorem 5, q and d are assumed to be the values that give the maximum objective function; however, they are initially unknown. According to our analysis of Algorithm 1, such q and d depend not only on the given transition matrix but also on the previous BPL (or the next FPL); however, the "previous BPL" is not known when BPL reaches its supremum at some time point. To solve this problem, we observe that, as T approaches infinity, BPL at times T and T + 1 both equal the supremum, so we can find the q and d that give the maximum objective function using Algorithm 1 by setting the "previous BPL" to this supremum. We can then find an appropriate ε to bound BPL based on Theorem 5. For example, if d = 0 and q ≠ 1, we can solve the one-variable equation α^B = log(q · e^{α^B} + 1 − q) + ε for ε (we can prove that a positive solution always exists), where α^B is the desired privacy level. Similarly, we can restrict FPL within a given value. We use this idea to bound both BPL and FPL, as shown in Algorithm 2.
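Under the reconstruction above, the budget calculation for Algorithm 2 reduces to solving a one-variable fixed-point equation; the sketch below makes that concrete. It treats the aggregated coefficients q and d (from Theorem 4 at the supremum) as given scalars, which is an assumption of this illustration, and it assumes the supremum is the fixed point of the recursion in Equations (13) and (23).

```python
import numpy as np

def budget_for_supremum(q: float, d: float, alpha_target: float) -> float:
    """Per-time-point budget eps such that alpha_target is the fixed point of
    alpha <- log((q*e^alpha + 1 - q) / (d*e^alpha + 1 - d)) + eps,
    i.e., the supremum of BPL (or FPL) over time never exceeds alpha_target."""
    ratio = (q * np.exp(alpha_target) + 1 - q) / (d * np.exp(alpha_target) + 1 - d)
    eps = alpha_target - np.log(ratio)
    if eps <= 0:
        raise ValueError("no positive budget bounds the leakage (strongest correlation)")
    return eps

# Example: bound BPL by alpha = 1 under q = 0.8, d = 0.1 (illustrative values).
print(budget_for_supremum(0.8, 0.1, 1.0))
```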
Algorithm 2. Releasing Data with α-DP_T by upper bound
Achieving α-DP_T by privacy leakage quantification
We now design Algorithm 3 to overcome a drawback of Algorithm 2: when T is too short for the accumulation of temporal privacy leakage to result in a significant increase, Algorithm 2 may not take full advantage of the privacy budgets. Our observation is that the DP mechanisms at the first and last time points should be allocated larger budgets because they are relatively more "influential" in terms of privacy loss. For example, the BPL of ℳt for t ∈ [2, T] is affected by the first mechanism ℳ1, and the FPL of ℳt for t ∈ [1, T − 1] is affected by the last mechanism ℳT. Our idea is to allocate larger privacy budgets to ℳ1 and ℳT so that both BPL and FPL are bounded by the given values at each time point. For example, if we want the BPL at every time point to be exactly the same value α^B, i.e., BPL(ℳ1) = ⋯ = BPL(ℳT) = α^B, then we need to ensure (i) PL0(ℳ1) = α^B and (ii) α^B = ℒB(α^B) + ε_m, in which ε_m is the privacy budget allocated in the "middle" of the timeline, i.e., from time 2 to time T − 1. We can solve the above equations to obtain ε_m, ensuring BPL(ℳ1) = ⋯ = BPL(ℳT) = α^B. Similarly, we can bound FPL by a given α^F by finding another middle budget. Assigning the smaller of the two as the middle budget ensures that both BPL and FPL are bounded by min{α^B, α^F}. It is easy to see that, with α^B and α^F chosen so that the resulting TPL equals α, we exactly achieve α-DP_T at each time point.
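A minimal sketch of this allocation strategy is shown below. It reuses loss_increment from the Algorithm 1 sketch above and assumes our reconstructed conditions (i) and (ii); in particular, it presumes the correlations are not the strongest ones, so that the middle budget is positive.

```python
import numpy as np
# Assumes loss_increment(...) from the Algorithm 1 sketch is available in scope.

def allocate_budgets(P_B: np.ndarray, P_F: np.ndarray,
                     alpha_B: float, alpha_F: float, T: int) -> list:
    """Sketch of Algorithm 3's allocation: full budgets at the endpoints and a constant
    middle budget keeping BPL <= alpha_B and FPL <= alpha_F at every time point."""
    n = len(P_B)
    # Maximum increments when the previous BPL / next FPL already equal their targets.
    inc_B = max(loss_increment(P_B[j], P_B[k], alpha_B)
                for j in range(n) for k in range(n) if j != k)
    inc_F = max(loss_increment(P_F[j], P_F[k], alpha_F)
                for j in range(n) for k in range(n) if j != k)
    eps_m = min(alpha_B - inc_B, alpha_F - inc_F)   # middle budget (assumed positive)
    return [alpha_B] + [eps_m] * (T - 2) + [alpha_F]
```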
Algorithm 3. Releasing Data with α-DP_T by quantification
We note that the initialization of α^B in both Algorithms 2 and 3 is nontrivial: too large or too small an α^B results in more iterations to converge to α-DP_T. We can prove that α-DP_T can always be achieved. We defer the detailed descriptions and proofs to the long version of our paper.
VI. Experimental Evaluation
In this section, we design experiments for the following: (1) verifying the runtime and correctness of our privacy leakage quantification algorithm (Algorithm 1), (2) investigating the impact of the temporal correlations on privacy leakage and (3) evaluating the data release Algorithms 2 and 3. We implemented all the algorithms in Java and conducted the experiments on a machine with an Intel Core i7 2.8GHz CPU and 16 GB RAM running OSX El Capitan.
The setting of temporal correlations
To evaluate if our privacy loss quantification algorithms can perform well under diverse circumstances, we need different degrees of temporal correlations. Although there are well studied methods to estimate the temporal correlations, in our experiments, we generate the correlations (transition matrices) directly to eliminate the effect of different estimation algorithms or datasets.
We now present a method for obtaining different degrees of temporal correlation. First, we generate a transition matrix indicating the "strongest" correlation, which contains a cell with probability 1.0 in each row but in different columns (this type of transition matrix leads to an upper bound of TPL, as shown in Examples 2 and 3). Then, we perform Laplacian smoothing [33], a method originally used to smooth polygonal meshes, to uniformize the probabilities of Pi to different degrees. Let pjk be the element at the j-th row and k-th column of the matrix Pi. The new probabilities are generated using Equation (25), where s is a positive parameter that controls the degree of smoothing; a smaller s results in a stronger temporal correlation.
| $p'_{jk} = \frac{p_{jk} + s}{\sum_{k'=1}^{n} (p_{jk'} + s)} = \frac{p_{jk} + s}{1 + n \cdot s}$ | (25) |
We note that, the degrees of correlation with s are only comparable with each other under the same n (i.e., |loc|).
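Equation (25) is reconstructed above as additive (Laplacian) smoothing; under that assumption, the correlation-generation procedure can be sketched as follows (the helper names are ours).

```python
import numpy as np

def strongest_matrix(n: int, rng: np.random.Generator) -> np.ndarray:
    """Transition matrix with a single probability-1 cell per row, in different columns."""
    P = np.zeros((n, n))
    P[np.arange(n), rng.permutation(n)] = 1.0
    return P

def laplacian_smoothing(P: np.ndarray, s: float) -> np.ndarray:
    """Additive (Laplacian) smoothing: p'_jk = (p_jk + s) / (1 + n*s).
    Smaller s -> closer to P (stronger correlation); larger s -> closer to uniform."""
    n = P.shape[1]
    return (P + s) / (1.0 + n * s)

rng = np.random.default_rng(0)
P = laplacian_smoothing(strongest_matrix(50, rng), s=0.01)
```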
A. Runtime of Privacy Quantification Algorithms
In this section, we compare the runtime of our algorithm with Gurobi⁵ and lp_solve⁶, two well-known solvers for optimization problems such as the linear-fractional programming problem (18)∼(20) in our setting. We run our privacy quantification algorithm 30 times and run Gurobi and lp_solve 5 times (because they are very time-consuming), and then calculate the average runtime of each. Each time, we randomly generate a transition matrix Pi whose elements are uniformly drawn from [0, 1]. We verified that the optimal solutions returned by the three algorithms are the same. In the following, we describe two factors that may affect the runtime: α, the BPL at the previous time point or the FPL at the next time point (i.e., one input of Algorithm 1), and n, the dimension of the transition matrix. The results are shown in Figure 5.
Fig. 5. Runtime of Privacy Quantification Algorithms.
Runtime vs. n
In Figure 5(a), we show the runtime of the three algorithms with inputs α = 10 and an n × n random probability matrix Pi. The runtime of all algorithms increases with n because n is the number of variables in our linear-fractional program. Algorithm 1 significantly outperforms Gurobi and lp_solve. For example, in Figure 5(a), when n = 150, Algorithm 1 spends only 11 seconds, whereas the runtimes of Gurobi and lp_solve are about 47 minutes and 38 hours, respectively. Since Gurobi and lp_solve spend a tremendous amount of time when n > 150, we omit them from the graph.
Runtime vs. α
In Figure 5(b), we show that a larger previous BPL (or next FPL), i.e., a larger α, may lead to a higher runtime of Algorithm 1, whereas Gurobi and lp_solve are stable for varying α. The reason is that, when α is large, Algorithm 1 may take more time in Lines 9 and 10 to update each pair qj ∈ q+ and dj ∈ d+ so that Inequality (21) is satisfied. An update in Line 10 is more likely to occur for a large α because the right-hand side of Inequality (21) increases with α. However, such growth of runtime with α does not last long because the update happens at most n − 1 times in the worst case (according to our previous analysis, the update terminates when only one element is left in q+). As shown in Figure 5(b), when α > 10, the runtime of Algorithm 1 becomes stable. We only obtain part of the runtime for lp_solve because a precision problem occurs when α ≥ 10 due to the design of lp_solve.
B. Impact of Temporal Correlations on Privacy Leakage
In this section, for the convenience of explanation, we only present the impact of temporal correlations on BPL because BPL and FPL grow in the same way but in reversed directions on the timeline. We examined s values in Equation (25) ranging from 0.005 to 1. We set n to 50 and 200. Let ε be the privacy budget of ℳt at each time point. We test ε = 1 and ε = 0.1. The results are shown in Figure 6 and are summarized as follows.
Fig. 6. Evaluation of BPL.
Privacy Leakage vs. s
Figure 6 shows that the privacy leakage caused by a non-trivial temporal correlation increases over time; this growth is sharp at first and then levels off because the increment is calculated recursively. The increase caused by a stronger temporal correlation (i.e., a smaller s) is steeper, and the increase lasts longer. Consequently, stronger correlations result in higher privacy leakage.
Privacy Leakage vs. ε
Comparing Figures 6(a) and (b), we find that 0.1-DP significantly delays the growth of privacy leakage. Taking s = 0.005 as an example, the noticeable increase continues for almost 8 timestamps when ε = 1 (Figure 6(a)), whereas it continues for approximately 80 timestamps when ε = 0.1 (Figure 6(b)). However, after a sufficiently long time, the privacy leakage in the case of ε = 0.1 is not substantially lower than that of ε = 1 under stronger temporal correlations. This is because, although the privacy leakage at each time point is limited by the small privacy budget, the adversaries can eventually learn sufficient information from the continuous releases.
Privacy Leakage vs. n
Under the same s, TPL is smaller when n (the dimension of the transition matrix) is larger, as shown by the lines for s = 0.005 with n = 50 and n = 200 in Figure 6. This is because the transition matrices tend to be more uniform (weaker correlations) when the dimension is larger.
In conclusion, the experiments reveal that our quantification algorithms can flexibly respond to different degrees of temporal correlations.
C. Evaluation of Data Releasing Algorithms
In this section, we first show a visualization of the budget allocation of Algorithms 2 and 3, and then we compare the data utility in terms of Laplace noise.
Figure 7 shows an example of budget allocation w.r.t. given backward and forward temporal correlations; the goal is to bound the TPL by a given α at each time point. It is easy to see that Algorithm 3 has better data utility because it exactly achieves the desired privacy level.
Fig. 7. Data Release Algorithms with α-DP_T.
Figure 8 shows the data utility of Algorithms 2 and 3. We calculate the absolute value of the Laplace noise under the allocated budgets (as shown in Figure 7); a higher noise value indicates lower data utility. In Figure 8(a), we test the data utility under backward and forward temporal correlations, both with parameter s = 0.001, which corresponds to relatively strong correlations. It shows that, when T is short, Algorithm 3 outperforms Algorithm 2; this is because Algorithm 2 perturbs the data in the same way regardless of how long T is. In Figure 8(b), we investigate the data utility under different degrees of correlation. The dashed line indicates the absolute value of the Laplace noise if no temporal correlation exists (privacy budget 2). It is easy to see that the data utility significantly decays under the strong correlation s = 0.01.
Fig. 8. Data utility of mechanisms.
VII. Related Work
Several studies have questioned whether differential privacy is valid for correlated data. Kifer and Machanavajjhala [23] [24] [25] first raised the important issue that differential privacy may not guarantee privacy if adversaries know the data correlations. In their line of work, they [23] argued that it is not possible to ensure any utility in addition to privacy without making assumptions about the data-generating distribution and the background knowledge available to an adversary. To this end, they proposed a general and customizable privacy framework called PufferFish, in which the potential secrets, discriminative pairs, and data generation need to be explicitly defined. Yang et al. [37] further investigated differential privacy on correlated tuples described using a proposed Gaussian correlation model. The privacy leakage w.r.t. adversaries with specified prior knowledge can be efficiently computed.
Zhu et al. [38] proposed correlated differential privacy by redefining the sensitivity of queries on correlated data; however, the privacy guarantee provided by this definition for spatio-temporal data is unclear. Very recently, Liu et al. [29] proposed dependent differential privacy by introducing dependence coefficients for analyzing the sensitivity of different queries under probabilistic dependences between tuples. However, such dependence coefficients do not easily account for the spatio-temporal correlations.
Dwork et al. first studied differential privacy under continual observation and proposed event-level/user-level privacy [13] [15]. Previous studies in this setting focused on the problems of high dimensionality [1] [27] [36], infinite sequences [6] [7] [22], sliding window queries [5], and real-time publishing [17] [28]. None of them addressed the problem of temporally correlated data.
To the best of our knowledge, no study has reported the risk of differential privacy under temporal correlations for the continuous aggregate release setting. Although a few studies [32] [35] have considered a similar adversarial model in which the adversaries have prior knowledge of temporal correlations represented by Markov chains, they focused on location privacy in the single-user setting. Shokri et al. [32] proposed an evaluation framework for location privacy protection, assuming that the adversary knows the transition probabilities of each user. Xiao et al. [35] proposed a mechanism extending DP for single user location sharing under temporal correlations modeled by Markov chains. In contrast, the scenario in this paper focuses on quantifying the privacy loss of traditional DP mechanisms under temporal correlations for continuous aggregate release setting.
VIII. Conclusions
In this paper, we quantified the risk of differential privacy under temporal correlations by formalizing, analyzing and calculating the privacy loss against adversaries who have knowledge of varying degrees of temporal correlations. This work opens up interesting future research directions, such as modeling temporal correlations together with other types of correlations (e.g., tuple-wise correlations), and combining our methods with previous studies that neglected the effect of temporal correlations in order to bound the temporal privacy leakage.
TABLE II.
The privacy guarantee of ε-DP mechanisms.
Acknowledgments
This work was supported by JSPS KAKENHI Grant Number 16K12437, the National Institute of Health (NIH) under award number R01GM114612, the Patient-Centered Outcomes Research Institute (PCORI) under contract ME-1310-07058, and the National Science Foundation under award CNS-1618932.
Appendix A Proof of Theorem 4
We need Dinkelbach’s Theorem and Lemma 3 in our proof.
Theorem 6 (Dinkelbach’s Theorem [11])
In a linear-fractional programming problem, suppose that the variable vector is x and the objective function is represented as Q(x)/D(x). The vector x∗ is an optimal solution if and only if
| $\max_{x} \Big\{ Q(x) − \tfrac{Q(x^*)}{D(x^*)} D(x) \Big\} = 0$ | (26) |
Lemma 3
For the following maximization problem (k1, …, kn ∈ ℝ) with the same constraints as those in the linear-fractional program (18)∼(20), i.e., maximize k1x1 + ⋯ + knxn subject to (19) and (20),
an optimal solution is as follows: if ki > 0, let xi = m · e^α, where m is a positive real number; if ki ≤ 0, let xi = m.
Proof
Without loss of generality, we suppose that the smallest value in the optimal solution is xn. Let yj = xj / xn for j ∈ [1, n − 1]; then 1 ≤ yj ≤ e^α. Replacing xj with yj·xn and setting xn = m, we obtain a new objective function whose optimal solution is equivalent to the original one. Because the only constraint is 1 ≤ yj ≤ e^α, the following is an optimal solution for maximizing the objective function: if kj > 0, let yj = e^α; if kj ≤ 0, let yj = 1. ■
Proof of Theorem 4
We first prove that, under the conditions shown in Theorem 4, i.e., Inequalities (21) and (22), an optimal solution of the problem (18)~(20) is:
| $x_j^* = m\,e^{\alpha}$ for $x_j \in x^+$, and $x_j^* = m$ for $x_j \in x^-$ | (27) |
where m is a positive real number.
For convenience, we rewrite our objective function as Q(x)/D(x), in which Q(x) = q·x and D(x) = d·x. Substituting x* of Equation (27) into Q(x) and D(x), we have Q(x*) = m(q·e^α + 1 − q) and D(x*) = m(d·e^α + 1 − d) (recall that q = Σq+ and d = Σd+). Then, we can rewrite Inequalities (21) and (22) in Theorem 4 as follows.
| $D(x^*)\, q_j − Q(x^*)\, d_j > 0$ for all $q_j \in q^+,\, d_j \in d^+$ | (28) |
| $D(x^*)\, q_j − Q(x^*)\, d_j \le 0$ for all $q_j \in q^-,\, d_j \in d^-$ | (29) |
According to Dinkelbach's Theorem, to prove that x* in (27) is an optimal solution, we only need to prove the following equation, because D(x*) > 0.
| $\max_{x} \{ D(x^*)\,Q(x) − Q(x^*)\,D(x) \} = 0$ | (30) |
We expand the above equation as follows.
| $\max_{x} \big\{ \big(D(x^*)\,q^+ − Q(x^*)\,d^+\big)\, x^+ + \big(D(x^*)\,q^- − Q(x^*)\,d^-\big)\, x^- \big\} = 0$ | (31) |
By Inequalities (28) and (29), we have D(x∗)q+ − Q(x∗)d+ > 0 and D(x∗)q− − Q(x∗)d− ≤ 0. Hence, according to Lemma 3, we can obtain the maximum value in Equation (30) by setting x+ = [m·e^α, …, m·e^α] and x− = [m, …, m], where m is a positive real number. Substituting these values, the maximum value in Equation (30) is indeed 0.
Therefore, by Dinkelbach's Theorem, x* is an optimal solution of the problem (18)∼(20). Substituting it into the objective function (18), we obtain the maximum value (q·e^α + 1 − q)/(d·e^α + 1 − d). ■
Footnotes
Lap(b) denotes a Laplace distribution with variance 2b².
The adversaries of types (i) and (ii) will not “guess” the missing correlations; otherwise, they fall under type (iii).
In this paper, we do not need to know the length of release time in advance.
In this case, given the current value l_i^t, the adversary cannot derive the value of user i's data at the adjacent time point.
http://www.gurobi.com/. Commercial software. We use version 6.5.
http://lpsolve.sourceforge.net/. Open source software. We use version 5.5.
References
- 1. Acs G, Castelluccia C. A case study: Privacy preserving release of spatio-temporal density in Paris. KDD. 2014:1679–1688.
- 2. Bajalinov EB. Linear-Fractional Programming: Theory, Methods, Applications and Software. 2003;84.
- 3. Bolot J, Fawaz N, Muthukrishnan S, Nikolov A, Taft N. Private decayed predicate sums on streams. ICDT. 2013:284–295.
- 4. Bradley CA, Rolka H, Walker D, Loonsk J. BioSense: implementation of a national early event detection and situational awareness system. MMWR Supplements. 2005;54:11–19.
- 5. Cao J, Xiao Q, Ghinita G, Li N, Bertino E, Tan KL. Efficient and accurate strategies for differentially-private sliding window queries. EDBT. 2013:191–202.
- 6. Cao Y, Yoshikawa M. Differentially private real-time data release over infinite trajectory streams. MDM. 2015;2:68–73.
- 7. Cao Y, Yoshikawa M. Differentially private real-time data publishing over infinite trajectory streams. IEICE Trans Inf & Syst. 2016;E99-D(1).
- 8. Chan THH, Shi E, Song D. Private and continual release of statistics. ACM Trans Inf Syst Secur. 2011;14(3):26:1–26:24.
- 9. Chen R, Fung BC, Yu PS, Desai BC. Correlated network data publication via differential privacy. VLDBJ. 2014;23(4):653–676.
- 10. Dantzig GB. Linear Programming and Extensions. Princeton University Press; 1998.
- 11. Dinkelbach W. On nonlinear fractional programming. Management Science. 1967;13(7):492–498.
- 12. Dwork C. Differential privacy. ICALP. 2006:1–12.
- 13. Dwork C. Differential privacy in new settings. SODA. 2010:174–183.
- 14. Dwork C, McSherry F, Nissim K, Smith A. Calibrating noise to sensitivity in private data analysis. Lecture Notes in Computer Science. 2006;3876:265–284.
- 15. Dwork C, Naor M, Pitassi T, Rothblum GN. Differential privacy under continual observation. STOC. 2010:715–724.
- 16. Dwork C, Roth A. The Algorithmic Foundations of Differential Privacy. 2013;9.
- 17. Fan L, Xiong L, Sunderam V. FAST: differentially private real-time aggregate monitor with filtering and adaptive sampling. SIGMOD. 2013:1065–1068.
- 18. Federal Highway Administration (FHWA). Traffic monitoring guide. 2013.
- 19. Gambs S, Killijian M-O, del Prado Cortez MN. Next place prediction using mobility Markov chains. MPM. 2012:3:1–3:6.
- 20. Horanont T, Phithakkitnukoon S, Leong TW, Sekimoto Y, Shibasaki R. Weather effects on the patterns of people's everyday activities: A study using GPS traces of mobile phone users. PLoS ONE. 2013;8(12):e81153. doi: 10.1371/journal.pone.0081153.
- 21. Jorgensen Z, Yu T, Cormode G. Conservative or liberal? Personalized differential privacy. ICDE. 2015:1023–1034.
- 22. Kellaris G, Papadopoulos S, Xiao X, Papadias D. Differentially private event sequences over infinite streams. PVLDB. 2014;7(12):1155–1166.
- 23. Kifer D, Machanavajjhala A. No free lunch in data privacy. SIGMOD. 2011:193–204.
- 24. Kifer D, Machanavajjhala A. A rigorous and customizable framework for privacy. PODS. 2012:77–88.
- 25. Kifer D, Machanavajjhala A. Pufferfish: A framework for mathematical privacy definitions. ACM Trans Database Syst. 2014;39(1):3:1–3:36.
- 26. Kosala R, Blockeel H. Web mining research: A survey. SIGKDD Explor Newsl. 2000;2(1):1–15.
- 27. Li H, Xiong L, Jiang X. Differentially private synthesization of multi-dimensional data using copula functions. EDBT. 2014:475–486. doi: 10.5441/002/edbt.2014.43.
- 28. Li H, Xiong L, Jiang X, Liu J. Differentially private histogram publication for dynamic datasets: an adaptive sampling approach. CIKM. 2015:1001–1010. doi: 10.1145/2806416.2806441.
- 29. Liu C, Chakraborty S, Mittal P. Dependence makes you vulnerable: Differential privacy under dependent tuples. NDSS. 2016.
- 30. Mathew W, Raposo R, Martins B. Predicting future locations with hidden Markov models. UbiComp. 2012:911–918.
- 31. McSherry FD. Privacy integrated queries: an extensible platform for privacy-preserving data analysis. SIGMOD. 2009:19–30.
- 32. Shokri R, Theodorakopoulos G, Le Boudec JY, Hubaux JP. Quantifying location privacy. SP. 2011:247–262.
- 33. Sorkine O, Cohen-Or D, Lipman Y, Alexa M, Rössl C, Seidel HP. Laplacian surface editing. SGP. 2004:175–184.
- 34. Theodorakopoulos G, Shokri R, Troncoso C, Hubaux JP, Le Boudec JY. Prolonging the hide-and-seek game: Optimal trajectory privacy for location-based services. WPES. 2014:73–82.
- 35. Xiao Y, Xiong L. Protecting locations with differential privacy under temporal correlations. CCS. 2015:1298–1309.
- 36. Xiao Y, Xiong L, Fan L, Goryczka S, Li H. DPCube: differentially private histogram release through multidimensional partitioning. Trans Data Privacy. 2014;7(3):195–222.
- 37. Yang B, Sato I, Nakagawa H. Bayesian differential privacy on correlated data. SIGMOD. 2015:747–762.
- 38. Zhu T, Xiong P, Li G, Zhou W. Correlated differential privacy: Hiding information in non-IID data set. IEEE Trans Inf Forensics Security. 2015;10(2):229–242.
