Author manuscript; available in PMC 2019 Mar 11. Published in final edited form as: Mach Learn. 2017 Jul 13;106(9-10):1681–1704. doi: 10.1007/s10994-017-5656-2

Preserving differential privacy in convolutional deep belief networks

NhatHai Phan 1, Xintao Wu 2, Dejing Dou 3
PMCID: PMC6411072  NIHMSID: NIHMS983875  PMID: 30867620

Abstract

The remarkable development of deep learning in the medicine and healthcare domain presents obvious privacy issues when deep neural networks are built on users’ personal and highly sensitive data, e.g., clinical records, user profiles, biomedical images, etc. However, only a few scientific studies on preserving privacy in deep learning have been conducted. In this paper, we focus on developing a private convolutional deep belief network (pCDBN), which essentially is a convolutional deep belief network (CDBN) under differential privacy. Our main idea of enforcing ϵ-differential privacy is to leverage the functional mechanism to perturb the energy-based objective functions of traditional CDBNs, rather than their results. One key contribution of this work is that we propose the use of Chebyshev expansion to derive the approximate polynomial representation of objective functions. Our theoretical analysis shows that we can further derive the sensitivity and error bounds of the approximate polynomial representation. As a result, preserving differential privacy in CDBNs is feasible. We applied our model in a health social network, i.e., YesiWell data, and in a handwriting digit dataset, i.e., MNIST data, for human behavior prediction, human behavior classification, and handwriting digit recognition tasks. Theoretical analysis and rigorous experimental evaluations show that the pCDBN is highly effective. It significantly outperforms existing solutions.

Keywords: Deep learning, Differential privacy, Human behavior prediction, Health informatics, Image classification

1. Introduction

Today, amid rapid adoption of electronic health records and wearables, the global health care systems are systematically collecting longitudinal patient health information, e.g., diagnoses, medication, lab tests, procedures, demography, clinical notes, etc. The patient health information is generated by one or more encounters in any healthcare delivery system (Jamoom et al. 2016). Healthcare data is now measured in exabytes, and it will reach the zettabyte and the yottabyte range in the near future (Fang et al. 2016). Although appropriate in a variety of situations, many traditional methods of analysis do not automatically capture complex and hidden features from large-scale and perhaps unlabeled data (Miotto et al. 2016). In practice, many health applications depend on including domain knowledge to construct relevant features, some of which are further based on supplemental data. This process is not straightforward and is time consuming, which may result in missed opportunities to discover novel patterns and features.

This is where deep learning, which is one of the state-of-the-art machine learning techniques, comes in to take advantage of the potential that large-scale healthcare data holds, especially in the age of digital health. Deep neural networks can discover novel patterns and dependencies in both unlabeled and labeled data by applying state-of-the-art training algorithms, e.g., greedy layer-wise training (Hinton et al. 2006), the contrastive divergence algorithm (Hinton 2002), etc. That makes it easier to extract useful information when building classifiers and predictors (LeCun et al. 2015).

Deep learning has applications in a number of healthcare areas, e.g., phenotype extraction and health risk prediction (Cheng et al. 2016), prediction of the development of various diseases including schizophrenia, a variety of cancers, diabetes, heart failure, etc. (Choi et al. 2016; Li et al. 2015; Miotto et al. 2016; Roumia and Steinhubl 2014; Wu et al. 2010), prediction of risk of readmission (Wu et al. 2010), Alzheimer’s diagnosis (Liu et al. 2014; Ortiz et al. 2016), risk prediction for chronic kidney disease progression (Perotte et al. 2015), physical activity prediction (Phan et al. 2015a, b, 2016a, c), feature learning from fMRI data (Plis et al. 2014), diagnosis code assignment (Gottlieb et al. 2013; Perotte et al. 2014), reconstruction of brain circuits (Helmstaedter et al. 2013), prediction of the activity of potential drug molecules (Ma et al. 2015), the effects of mutations in non-coding DNA on gene expressions (Leung et al. 2014; Xiong et al. 2015), and many more.

The development of deep learning in the domain of medicine and healthcare presents obvious privacy issues when deep neural networks are built based on patients’ personal and highly sensitive data, e.g., clinical records, user profiles, biomedical images, etc. To convince individuals to allow their data to be included in deep learning projects, principled and rigorous privacy guarantees must be provided. However, only a few deep learning techniques that incorporate privacy protections have yet been developed. In clinical trials, such lack of protection and efficacy may put patient data at high risk and expose healthcare providers to legal action based on HIPAA/HITECH law (U.S. Department of Health and Human Services 2016a, b). Motivated by this, we aim to develop an algorithm to preserve privacy in fundamental deep learning models in this paper.

Releasing sensitive results of statistical analyses and data mining while protecting privacy has been studied in the past few decades. One state-of-the-art privacy model is ϵ-differential privacy (Dwork et al. 2006). A differential privacy model ensures that the adversary cannot infer any information about any particular data record with high confidence (controlled by a privacy budget ϵ) from the released learning models. This strong standard for privacy guarantees is still valid even if the adversary possesses all the remaining tuples of the sensitive data. The privacy budget ϵ controls the amount by which the output distributions induced by two neighboring databases may differ. We say that two databases are neighboring if they differ in a single data record, that is, if one data record is present in one database and absent in the other. It is clear that smaller values of ϵ enforce a stronger privacy guarantee, because it is more difficult to infer any particular data record by distinguishing any two neighboring databases from the output distributions. Differential privacy has been studied from the theoretical perspective, e.g., Chaudhuri and Monteleoni (2008a), Hay et al. (2010), Kifer and Machanavajjhala (2011) and Lee and Clifton (2012). Different types of mechanisms [e.g., the Laplace mechanism (Dwork et al. 2006), the smooth sensitivity (Nissim et al. 2007), the exponential mechanism (McSherry and Talwar 2007a), and the perturbation of objective function (Chaudhuri and Monteleoni 2008a)] have been studied to enforce differential privacy.

Combining differential privacy and deep learning, i.e., the two state-of-the-art techniques in privacy preserving and machine learning, is timely and crucial. This is a non-trivial task, and therefore only a few scientific studies have been conducted. In Shokri and Shmatikov (2015), the authors proposed a distributed training method, which directly injects noise into gradient descents of parameters, to preserve privacy in neural networks. The method is attractive for applications of deep learning on mobile devices. However, it may consume an unnecessarily large portion of the privacy budget to ensure model accuracy, as the number of training epochs and the number of shared parameters among multiple parties are often large. To improve this, Abadi et al. (2016) proposed a privacy accountant based on the composition theorem (Dwork and Lei 2009), which keeps track of privacy spending and enforces applicable privacy policies. The approach is still dependent on the number of training epochs. With a small privacy budget ϵ, only a small number of epochs can be used to train the model. In practice, that could potentially affect the model utility, when the number of training epochs needs to be large to guarantee the model accuracy.

Recently, Phan et al. (2016c) proposed deep private auto-encoders (dPAs), in which differential privacy is enforced by perturbing the objective functions of deep auto-encoders (Bengio 2009). It is worth noting that the privacy budget consumed by dPAs is independent of the number of training epochs. A different method, named CryptoNets, was proposed in Dowlin et al. (2016) towards the application of neural networks to encrypted data. A data owner can send their encrypted data to a cloud service that hosts the network, and get encrypted predictions in return. This method is different from our context, since it does not aim at releasing learning models under privacy protections.

Existing differential privacy preserving algorithms in deep learning pose major concerns about their applicability. They are either designed for a specific deep learning model, i.e., deep auto-encoders (Phan et al. 2016c), or they are affected by the number of training epochs (Shokri and Shmatikov 2015; Abadi et al. 2016). Therefore, there is an urgent demand for the development of a privacy preserving framework, such that: (1) It is totally independent of the number of training epochs in consuming privacy budget; and (2) It has the potential to be applied in typical energy-based deep neural networks. Such frameworks will significantly promote the application of privacy preservation in deep learning.

Motivated by this, we aim at developing a private convolutional deep belief network (pCDBN), which essentially is a convolutional deep belief network (CDBN) (Lee et al. 2009) under differential privacy. CDBN is a typical and well-known deep learning model. It is an energy-based model. Preserving differential privacy in CDBNs is non-trivial, since CDBNs are more complicated compared with other fundamental models, such as auto-encoders and Restricted Boltzmann Machines (RBM) (Smolensky 1986), in terms of structural designs and learning algorithms. In fact, there are multiple groups of hidden units in each of which parameters are shared in a CDBN. Inappropriate analysis might result in consuming too much of a privacy budget in training phases. The privacy consumption also must be independent of the number of training epochs to guarantee the potential to work with large datasets.

Our key idea is to apply Chebyshev Expansion (Rivlin 1990) to derive polynomial approximations of the non-linear objective functions used in CDBNs, such that the design of differential privacy-preserving deep learning is feasible. Then, we inject noise into these polynomial forms by leveraging the functional mechanism (Zhang et al. 2012), so that ϵ-differential privacy is satisfied in the training phase of each hidden layer. Finally, the resulting private hidden layers can be stacked on each other to produce a private convolutional deep belief network (pCDBN).

To demonstrate the effectiveness of our framework, we applied our model for binomial human behavior prediction and classification tasks in a health social network. A novel human behavior model based on the pCDBN is proposed to predict whether an overweight or obese individual will increase physical exercise in a real health social network. To illustrate the ability of our model to work with large-scale datasets, we also conducted additional experiments on the well-known handwriting digit dataset (MNIST data) (Lecun et al. 1998). We compare our model with the private stochastic gradient descent algorithm, denoted pSGD, from Abadi et al. (2016), and the deep private auto-encoders (dPAs) (Phan et al. 2016c). The pSGD and dPAs are the state-of-the-art algorithms in preserving differential privacy in deep learning. Theoretical analysis and rigorous experimental evaluations show that our model is highly effective. It significantly outperforms existing solutions.

The rest of the paper is organized as follows. In Sect. 2, we introduce preliminaries and related works. We present our private convolutional deep belief network in Sect. 3. The experimental evaluation is in Sect. 4, and we conclude the paper in Sect. 5.

2. Preliminaries and related works

In this section, we briefly revisit the definition of differential privacy, the functional mechanism (Zhang et al. 2012), convolutional deep belief networks (Lee et al. 2009), and the Chebyshev Expansion (Rivlin 1990). Let D be a database that contains n tuples t1, t2, …, tn and d+1 attributes X1, X2, …, Xd, Y. For each tuple ti = (xi1, xi2, …, xid, yi), we assume, without loss of generality, $\sum_{j=1}^{d} x_{ij}^2 \le 1$ where $x_{ij} \ge 0$, and yi follows a binomial distribution. Our objective is to construct a deep neural network ρ from D that (i) takes xi = (xi1, xi2, …, xid) as input and (ii) outputs a prediction of yi that is as accurate as possible. ti and xi are used interchangeably to indicate the data tuple i. The model function ρ contains a model parameter vector W. To evaluate whether W leads to an accurate model, a cost function fD(W) is often used to measure the difference between the original and predicted values of yi. As the released model parameter W may disclose sensitive information of D, to protect the privacy, we require that the model training be performed with an algorithm that satisfies ϵ-differential privacy.

Differential privacy (Dwork et al. 2006) establishes a strong standard for privacy guarantees for algorithms, e.g., training algorithms of machine learning models, on aggregate databases. It is defined in the context of neighboring databases: we say that two databases are neighboring if they differ in a single data record, that is, if one data record is present in one database and absent in the other. The definition of differential privacy is as follows:

Definition 1 [ϵ-Differential Privacy (Dwork et al. 2006)] A randomized algorithm A fulfills ϵ-differential privacy, iff for any two databases D and D′ differing in at most one tuple, and for all $O \in Range(A)$, we have:

$\Pr[A(D) = O] \le e^{\epsilon} \Pr[A(D') = O]$ (1)

where the privacy budget ϵ controls the amount by which the distributions induced by two neighboring datasets may differ. Smaller values of ϵ enforce a stronger privacy guarantee of A.

A general method for computing an approximation to any function f (on D) while preserving ϵ-differential privacy is the Laplace mechanism (Dwork et al. 2006), where the output of f is a vector of real numbers. In particular, the mechanism exploits the global sensitivity of f over any two neighboring databases (differing in at most one record), which is denoted as $GS_f(D)$. Given $GS_f(D)$, the Laplace mechanism ensures ϵ-differential privacy by injecting noise η into each value in the output of f(D) as

$pdf(\eta) = \frac{\epsilon}{2\,GS_f(D)} \exp\Big(-|\eta| \cdot \frac{\epsilon}{GS_f(D)}\Big)$ (2)

where η is drawn i.i.d. from a Laplace distribution with zero mean and scale $GS_f(D)/\epsilon$.
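As a concrete illustration, the following Python sketch draws Laplace noise at scale $GS_f(D)/\epsilon$ and adds it to a vector-valued query result; the function name and interface are ours, not from the paper.

```python
import numpy as np

def laplace_mechanism(f_output, global_sensitivity, epsilon, rng=None):
    """Add i.i.d. Laplace(0, GS_f(D)/epsilon) noise to each coordinate of f(D)."""
    rng = rng or np.random.default_rng()
    scale = global_sensitivity / epsilon          # noise scale from Eq. (2)
    noise = rng.laplace(loc=0.0, scale=scale, size=np.shape(f_output))
    return np.asarray(f_output, dtype=float) + noise

# Example: a counting query with global sensitivity 1 and privacy budget epsilon = 1.0
private_count = laplace_mechanism(f_output=[42.0], global_sensitivity=1.0, epsilon=1.0)
```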

Differential privacy has been extensively studied from both the theoretical perspective, e.g., Chaudhuri and Monteleoni (2008b), Kifer and Machanavajjhala (2011), and the application perspective, e.g., data collection (Erlingsson et al. 2014), data streams (Chan et al. 2012), stochastic gradient descents (Song et al. 2013), recommendation (McSherry and Mironov 2009), regression (Chaudhuri and Monteleoni 2008b), online learning (Jain et al. 2012), publishing contingency tables (Xiao et al. 2010), and spectral graph analysis (Wang et al. 2013). The mechanisms of achieving differential privacy mainly include the classic approach of adding Laplacian noise (Dwork et al. 2006), the exponential mechanism (McSherry and Talwar 2007b), and the functional perturbation approach (Chaudhuri and Monteleoni 2008b).

2.1. Functional mechanism revisited

Functional mechanism (Zhang et al. 2012) is an extension of the Laplace mechanism. It achieves ϵ-differential privacy by perturbing the objective function $f_D(W)$ and then releasing the model parameter $\bar{W}$ that minimizes the perturbed objective function $\bar{f}_D(W)$ instead of the original one. The functional mechanism exploits the polynomial representation of $f_D(W)$. The model parameter W is a vector that contains d values $W_1, \dots, W_d$. Let $\phi(W)$ denote a product of $W_1, \dots, W_d$, namely, $\phi(W) = W_1^{c_1} W_2^{c_2} \cdots W_d^{c_d}$ for some $c_1, \dots, c_d \in \mathbb{N}$. Let $\Phi_j$ ($j \in \mathbb{N}$) denote the set of all products of $W_1, \dots, W_d$ with degree j, i.e., $\Phi_j = \{W_1^{c_1} W_2^{c_2} \cdots W_d^{c_d} \mid \sum_{l=1}^{d} c_l = j\}$. By the Stone–Weierstrass Theorem, any continuous and differentiable $f(t_i, W)$ can always be written as a polynomial of $W_1, \dots, W_d$, for some $J \in [0, \infty]$, i.e., $f(t_i, W) = \sum_{j=0}^{J} \sum_{\phi \in \Phi_j} \lambda_{\phi}^{t_i} \phi(W)$, where $\lambda_{\phi}^{t_i}$ denotes the coefficient of $\phi(W)$ in the polynomial. Note that ti and xi are used interchangeably to indicate the data tuple i.

For instance, the polynomial expression of the loss function in linear regression is as follows: $f(x_i, W) = (y_i - x_i^{T} W)^2 = y_i^2 - \sum_{j=1}^{d} (2 y_i x_{ij}) W_j + \sum_{1 \le j, l \le d} (x_{ij} x_{il}) W_j W_l$.

We can see that it only involves monomials in $\Phi_0 = \{1\}$, $\Phi_1 = \{W_1, \dots, W_d\}$, and $\Phi_2 = \{W_i W_j \mid i, j \in [1, d]\}$. Each $\phi(W)$ has its own coefficient, e.g., for $W_j$, its polynomial coefficient is $\lambda_{\phi}^{t_i} = -2 y_i x_{ij}$. Similarly, $f_D(W)$ can also be expressed as a polynomial of $W_1, \dots, W_d$:

$f_D(W) = \sum_{j=0}^{J} \sum_{\phi \in \Phi_j} \sum_{t_i \in D} \lambda_{\phi}^{t_i} \phi(W)$ (3)

Lemma 1 (Zhang et al. 2012) Let D and D′ be any two neighboring datasets. Let $f_D(W)$ and $f_{D'}(W)$ be the objective functions of regression analysis on D and D′, respectively. The following inequality holds:

$\Delta = \sum_{j=1}^{J} \sum_{\phi \in \Phi_j} \Big\| \sum_{t_i \in D} \lambda_{\phi}^{t_i} - \sum_{t_i' \in D'} \lambda_{\phi}^{t_i'} \Big\|_1 \le 2 \max_{t} \sum_{j=1}^{J} \sum_{\phi \in \Phi_j} \big\| \lambda_{\phi}^{t} \big\|_1$

where $t_i$, $t_i'$, or t is an arbitrary tuple.

To achieve ϵ-differential privacy, $f_D(W)$ is perturbed by injecting Laplace noise $Lap(\frac{\Delta}{\epsilon})$ into its polynomial coefficients $\lambda_{\phi}$, and then the model parameter $\bar{W}$ is derived to minimize the perturbed function $\bar{f}_D(W)$, where $\Delta = 2 \max_{t} \sum_{j=1}^{J} \sum_{\phi \in \Phi_j} \|\lambda_{\phi}^{t}\|_1$, according to Lemma 1.
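A minimal sketch of this coefficient-perturbation step is shown below; the data structure and names are illustrative, not from the released implementation.

```python
import numpy as np

def perturb_polynomial_coefficients(coeffs, sensitivity, epsilon, rng=None):
    """Functional mechanism: add Laplace(sensitivity / epsilon) noise to every
    polynomial coefficient lambda_phi of the objective before training.

    coeffs: dict mapping each monomial phi(W) (e.g., a tuple of variable indices)
            to its aggregated coefficient sum over t_i in D of lambda_phi^{t_i}.
    """
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return {phi: c + rng.laplace(0.0, scale) for phi, c in coeffs.items()}

# Afterwards, W_bar is obtained by minimizing the objective rebuilt from the
# perturbed coefficients, e.g., with gradient descent.
```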

2.2. Convolutional deep belief networks

The basic convolutional restricted Boltzmann machine (CRBM) (Lee et al. 2009) consists of two layers: an input layer V and a hidden layer H (Fig. 1). The hidden layer consists of K groups, each of which is an $N_H \times N_H$ array of binary units, so there are $N_H^2 K$ hidden units in total. Each of the K groups is associated with an $N_W \times N_W$ filter, where $N_W = N_V - N_H + 1$. The filter weights are shared across all the hidden units within the group. In addition, each group of hidden units has a bias $b_k$, and all visible units share a single bias c. Training a CRBM amounts to minimizing the following energy function E(v, h):

$E(v,h) = -\sum_{k=1}^{K} \sum_{i,j=1}^{N_H} \sum_{r,s=1}^{N_W} h_{ij}^{k} W_{rs}^{k} v_{i+r-1,j+s-1} - \sum_{k=1}^{K} b_k \sum_{i,j=1}^{N_H} h_{ij}^{k} - c \sum_{i,j=1}^{N_V} v_{ij}$ (4)

Fig. 1 Convolutional restricted Boltzmann machine (CRBM)

Gibbs sampling can be applied using the following conditional distributions:

$P(h_{ij}^{k} = 1 \mid v) = \sigma\big((\tilde{W}^{k} * v)_{ij} + b_k\big)$ (5)
$P(v_{ij} = 1 \mid h) = \sigma\big((\textstyle\sum_k W^{k} * h^{k})_{ij} + c\big)$ (6)

where σ is the sigmoid function.
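For concreteness, the following NumPy/SciPy sketch performs one block-Gibbs step using Eqs. (5) and (6); the shapes, helper names, and convolution conventions are ours and only illustrate the conditional distributions.

```python
import numpy as np
from scipy.signal import convolve2d

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def crbm_gibbs_step(v, W, b, c, rng):
    """One block-Gibbs step for a CRBM.

    v : (N_V, N_V) binary visible image
    W : (K, N_W, N_W) filters, b : (K,) hidden biases, c : scalar visible bias
    Returns sampled hidden groups h of shape (K, N_H, N_H) and P(v = 1 | h).
    """
    K = W.shape[0]
    # Eq. (5): P(h_ij^k = 1 | v) = sigma((W~^k * v)_ij + b_k), a 'valid' convolution
    p_h = np.stack([sigmoid(convolve2d(v, W[k][::-1, ::-1], mode='valid') + b[k])
                    for k in range(K)])
    h = (rng.random(p_h.shape) < p_h).astype(float)   # Bernoulli samples of hidden units
    # Eq. (6): P(v_ij = 1 | h) = sigma((sum_k W^k * h^k)_ij + c), a 'full' convolution
    p_v = sigmoid(sum(convolve2d(h[k], W[k], mode='full') for k in range(K)) + c)
    return h, p_v

# Example usage with N_V = 8 and N_W = 3, so N_H = N_V - N_W + 1 = 6
rng = np.random.default_rng(0)
v = (rng.random((8, 8)) < 0.5).astype(float)
h, p_v = crbm_gibbs_step(v, W=rng.normal(size=(4, 3, 3)), b=np.zeros(4), c=0.0, rng=rng)
```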

The energy function given the dataset D is as follows:

$E(D,W) = -\sum_{t \in D} \sum_{k=1}^{K} \sum_{i,j=1}^{N_H} \sum_{r,s=1}^{N_W} h_{ij}^{k,t} W_{rs}^{k} v_{i+r-1,j+s-1}^{t} - \sum_{t \in D} \sum_{k=1}^{K} b_k \sum_{i,j=1}^{N_H} h_{ij}^{k,t} - c \sum_{t \in D} \sum_{i,j=1}^{N_V} v_{ij}^{t}$ (7)

The max-pooling layer plays the role of a signal filter. By stacking multiple CRBMs on top of each other, we can construct a convolutional deep belief network (CDBN) (Lee et al. 2009). Regarding the softmax layer, we use the cross-entropy error function for a binomial prediction task. Let $Y_T$ be a set of labeled data points used to train the model; the cross-entropy error function is given by

$C(Y_T, \theta) = -\sum_{i=1}^{|Y_T|} \big( y_i \log \hat{y}_i + (1 - y_i) \log(1 - \hat{y}_i) \big)$ (8)

where ‘T’ in YT is used to denote “training” data.

We can use the layer-wise unsupervised training algorithm (Bengio et al. 2007) and back-propagation to train CDBNs.

2.3. Chebyshev polynomials

In principle, many polynomial approximation techniques, e.g., Taylor Expansion, Bernoulli polynomial, Euler polynomial, Fourier series, Discrete Fourier transform, Legendre polynomial, Hermite polynomial, Gegenbauer polynomial, Laguerre polynomial, Jacobi polynomial, and even the state-of-the-art techniques in the twentieth century, including spectral methods and Finite Element methods (Harper 2012), can be applied to approximate non-linear energy functions used in CDBNs. However, figuring out an appropriate way to use each of them is non-trivial. First, estimating the lower and upper bounds of the approximation error incurred by applying a particular polynomial in deep neural networks is not straightforward; it is very challenging. It is significant to have a strong guarantee in terms of approximation errors incurred by the use of any approximation approach to ensure model utility in deep neural networks. In addition, the approximation error bounds must be independent of the number of data instances to guarantee the ability to be applied in large datasets without consuming excessive privacy budgets.

Given these challenging issues, the Chebyshev polynomial really stands out. The most important reason behind the usage of Chebyshev polynomials is that the upper and lower bounds of the error incurred by approximating activation functions and energy functions can be estimated and proved, as shown in the next section. Furthermore, these error bounds do not depend on the number of data instances, as we will present in Sect. 3.4. This is a substantial result when working with complex models, such as deep neural networks on large-scale datasets. In addition, Chebyshev polynomials are well-known, efficient, and widely used in many real-world applications (Mason and Handscomb 2002). Therefore, we propose to use Chebyshev polynomials in our work to preserve differential privacy in convolutional deep belief networks.

The four kinds of Chebyshev polynomials can be generated from the two-term recursion formula:

$T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x), \quad T_0(x) = 1$ (9)

with different choices of initial values T1(x) = x, 2x, 2x − 1, 2x + 1.

According to the well-known result (Rivlin 1990), if a function f(x) is Riemann integrable on the interval [−1, 1], then f(x) can be represented by a Chebyshev polynomial approximation as follows:

$f(x) = \sum_{k=0}^{\infty} A_k T_k(x) = A \cdot X(T(x))$ (10)

where $A_k = \frac{2}{\pi} \int_{-1}^{1} \frac{f(x) T_k(x)}{\sqrt{1 - x^2}} \, dx$, $\forall k$, $A = [\frac{1}{2} A_0, \dots, A_k, \dots]$, $T_k(x)$ is the Chebyshev polynomial of degree k, and $X(T(x)) = [T_0(x) \dots T_k(x) \dots]$.

The closed form expression for Chebyshev polynomials of any order is:

$T_i(x) = \sum_{j=0}^{[i/2]} (-1)^j \binom{i}{2j} x^{i-2j} (1 - x^2)^j$ (11)

where $[i/2]$ is the integer part of $\frac{i}{2}$.
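For concreteness, the coefficients $A_k$ in Eq. (10) can be estimated numerically with Gauss–Chebyshev quadrature, as in the following illustrative Python sketch (ours, not from the paper's released code); it approximates the sigmoid on [−1, 1].

```python
import numpy as np

def chebyshev_coefficients(f, degree, n_nodes=200):
    """Estimate A_k = (2/pi) * integral of f(x) T_k(x) / sqrt(1 - x^2) over [-1, 1]
    (Eq. 10) with Gauss-Chebyshev quadrature at n_nodes points."""
    theta = np.pi * (np.arange(n_nodes) + 0.5) / n_nodes
    x = np.cos(theta)                            # Chebyshev nodes on [-1, 1]
    fx = f(x)
    return np.array([(2.0 / n_nodes) * np.sum(fx * np.cos(k * theta))
                     for k in range(degree + 1)])

def chebyshev_eval(coeffs, x):
    """Evaluate sum_k A_k T_k(x) using the [A_0/2, A_1, ...] convention of Eq. (10)."""
    c = np.array(coeffs, dtype=float)
    c[0] *= 0.5
    return np.polynomial.chebyshev.chebval(x, c)

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
A = chebyshev_coefficients(sigmoid, degree=7)
xs = np.linspace(-1, 1, 101)
print(np.max(np.abs(chebyshev_eval(A, xs) - sigmoid(xs))))   # small approximation error
```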

3. Private convolutional deep belief network

In this section, we formally present our framework (Algorithm 1) to develop a convolutional deep belief network under ϵ-differential privacy. Intuitively, the algorithm used to develop dPAs can be applied to CDBNs. However, the main issue is that their approximation technique has been especially designed for cross-entropy error-based objective functions (Bengio 2009). There are many challenging issues in adapting their technique to CDBNs. The cross-entropy error-based objective function is very different from the energy-based objective function (Eq. 7). As such: (1) It is difficult to derive its global sensitivity used in the functional mechanism, and (2) It is difficult to identify the approximation error bounds in CDBNs. To achieve private convolutional deep belief networks (pCDBNs), we develop a new approach that uses the Chebyshev Expansion (Rivlin 1990) to derive polynomial approximations of non-linear energy-based objective functions (Eq. 7), such that differential privacy can be preserved by leveraging the functional mechanism.

Our framework to construct the pCDBN includes four steps (Algorithm 1).

  • First, we derive a polynomial approximation of the energy-based function E(D, W) (Eq. 7) using the Chebyshev Expansion. The polynomial approximation is denoted as $\hat{E}(D,W)$.

  • Second, the functional mechanism is used to perturb the approximation function $\hat{E}(D,W)$; the perturbed function is denoted as $\bar{E}(D,W)$. We introduce a new result of sensitivity computation for CDBNs. Next, we train the model to obtain the optimal perturbed parameters $\bar{W}$ by using gradient descent. That results in private hidden layers, which are used to produce max-pooling layers. Note that we do not need to enforce differential privacy in max-pooling layers, because max-pooling layers play the role of signal filters only.

  • Third, we stack multiple pairs of a private hidden layer and a max-pooling layer (H, P) on top of each other to construct the private convolutional deep belief network (pCDBN).

  • Finally, we apply the technique presented in Phan et al. (2016c) to enforce differential privacy in the softmax layer for prediction and classification tasks.

Algorithm 1:

Private Convolutional Deep Belief Network

  • 1:

    Derive a polynomial approximation of the energy function E(D, W) (Eq. 7), denoted as E^(D,W)

  • 2:

    The function E^(D,W) is perturbed by using functional mechanism (FM) (Zhang et al. 2012), the perturbed function is denoted as E¯(D,W)

  • 3:

    Stack the private hidden and pooling layers

  • 4:

    By using the technique in Phan et al. (2016c), we derive and perturb the polynomial approximation of the softmax layer $\hat{C}(\theta)$ (Eq. 17); the perturbed function is denoted as $\bar{C}(\theta)$. Return $\bar{\theta} = \arg\min_{\theta} \bar{C}(\theta)$

Let us first derive the polynomial approximation form of E(D, W) by applying Chebyshev Expansion, as follows.

3.1. Polynomial approximation of the energy function

There are two challenges in the energy function E(D, W) that prevent us from applying it for private data reconstruction analysis: (1) Gibbs sampling is used to estimate the value of every $h_{ij}^k$; and (2) the probability of every $h_{ij}^k$ being equal to 1 is a sigmoid function, which is not a polynomial function of the parameters $W^k$. Therefore, it is difficult to derive the sensitivity and error bounds of the approximate polynomial representation of the energy function E(D, W). Perturbing Gibbs sampling is challenging. Meanwhile, injecting noise into the results of Gibbs sampling will significantly affect the properties of hidden variables, i.e., values of hidden variables might fall out of their original bounds, i.e., [0, 1].

To address this, we propose to preserve differential privacy in the model before applying Gibbs sampling. The generality is still guaranteed since Gibbs sampling is applied for all hidden units. In addition, we need to derive an effective polynomial approximation of the energy function, so that differential privacy preserving is feasible. First, we propose to consider the probability $P(h_{ij}^k = 1 \mid v) = \sigma\big((W^k * v)_{ij} + b_k\big)$ instead of $h_{ij}^k$ in the energy function E(D, W). The main goal of minimizing the energy function, i.e., “the better the reconstruction of v is, the better the parameters W are,” remains the same. Therefore, the generality of our proposed approach is still guaranteed. The energy function can be rewritten as follows:

$\tilde{E}(D,W) = -\sum_{t \in D} \Big[ \sum_{k=1}^{K} \sum_{i,j=1}^{N_H} \sum_{r,s=1}^{N_W} \sigma\big((W^k * v^t)_{ij} + b_k\big) \times W_{rs}^{k} v_{i+r-1,j+s-1}^{t} + \sum_{k=1}^{K} b_k \sum_{i,j=1}^{N_H} \sigma\big((W^k * v^t)_{ij} + b_k\big) + c \sum_{i,j=1}^{N_V} v_{ij}^{t} \Big]$ (12)

As the sigmoid function σ(·) in neural networks satisfies the Riemann integrable condition (Vlcek 2012), it can be approximated by the Chebyshev series. We propose to derive a Chebyshev polynomial approximation function for $\sigma\big((W^k * v^t)_{ij} + b_k\big)$, which results in a polynomial approximation function for our energy function $\tilde{E}(\cdot)$. To make our sigmoid function satisfy the Riemann integrable condition on [−1, 1], we rewrite it as $\sigma\big(\frac{(W^k * v^t)_{ij} + b_k}{Z_{ij}^k}\big)$, where $Z_{ij}^k$ is a local response normalization (LRN) term which can be computed as follows: $Z_{ij}^k = \max\Big( \big|(W^k * v^t)_{ij} + b_k\big|, \big[ q + \alpha \sum_{m=\max(0, k-l/2)}^{\min(K-1, k+l/2)} \big((W^m * v^t)_{ij} + b_m\big)^2 \big]^{\beta} \Big)$, where the constants q, l, α, and β are hyper-parameters, and K is the total number of feature maps. As in Krizhevsky et al. (2012), we used q = 2, l = 5, α = $10^{-4}$, and β = 0.75 in our experiments.
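A small sketch of computing $Z_{ij}^k$ at one hidden location follows, assuming the pre-activations $(W^m * v^t)_{ij} + b_m$ for all K groups are already available; the array layout, defaults, and function name are ours.

```python
import numpy as np

def lrn_term(pre_act, k, q=2, l=5, alpha=1e-4, beta=0.75):
    """Local response normalization term Z_ij^k for a single hidden location.

    pre_act : (K,) array holding (W^m * v^t)_ij + b_m for every group m.
    """
    K = pre_act.shape[0]
    lo, hi = max(0, k - l // 2), min(K - 1, k + l // 2)
    lrn = (q + alpha * np.sum(pre_act[lo:hi + 1] ** 2)) ** beta
    # Z is never smaller than |pre-activation|, so the normalized argument stays in [-1, 1]
    return max(abs(pre_act[k]), lrn)

Z = lrn_term(np.array([0.3, -1.7, 0.9, 2.4]), k=1)   # example with K = 4 groups
```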

From Eq. (10), the Chebyshev polynomial approximation of our sigmoid function is as follows:

$\sigma\Big(\frac{(W^k * v^t)_{ij} + b_k}{Z_{ij}^k}\Big) = \sum_{l=0}^{\infty} A_l T_l\Big(\frac{(W^k * v^t)_{ij} + b_k}{Z_{ij}^k}\Big)$ (13)

where Al and Tl can be computed using Eqs. (10) and (11).

Now, there is still a challenge that prevents us from applying the functional mechanism to preserve differential privacy with Eq. (13): the equation involves an infinite summation. To address this problem, we remove all orders greater than L. Based on the Chebyshev series, the polynomial approximation of the energy function $\tilde{E}(\cdot)$ can be written as:

$\hat{E}(D,W) = -\sum_{t \in D} \Big[ \sum_{k=1}^{K} \sum_{i,j=1}^{N_H} \sum_{r,s=1}^{N_W} \Big( \sum_{l=0}^{L} A_l T_l\Big(\frac{(W^k * v^t)_{ij} + b_k}{Z_{ij}^k}\Big) \Big) \times W_{rs}^{k} v_{i+r-1,j+s-1}^{t} + \sum_{k=1}^{K} b_k \sum_{i,j=1}^{N_H} \sum_{l=0}^{L} A_l T_l\Big(\frac{(W^k * v^t)_{ij} + b_k}{Z_{ij}^k}\Big) + c \sum_{i,j=1}^{N_V} v_{ij}^{t} \Big]$ (14)

$\hat{E}(\cdot)$ is a polynomial approximation function of the original energy function E(·) in Eq. 7. Furthermore, the term $\sum_{l=0}^{L} A_l T_l\big(\frac{(W^k * v^t)_{ij} + b_k}{Z_{ij}^k}\big)$ can be rewritten as $\sum_{l=0}^{L} \alpha_l \big(\frac{(W^k * v^t)_{ij} + b_k}{Z_{ij}^k}\big)^l$, where the $\alpha_l$ are the Chebyshev polynomial coefficients. For instance, given L = 7, we have $\sum_{l=0}^{L=7} A_l T_l(X) = \frac{1}{2^5}\big(-5X^7 + 21X^5 - 35X^3 + 35X + 16\big)$, where $X = \frac{(W^k * v^t)_{ij} + b_k}{Z_{ij}^k}$.
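A short, self-contained sketch of this rewriting step is given below, using NumPy's Chebyshev utilities as a stand-in for the truncated series; the helper calls are our illustration, not the paper's implementation.

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from numpy.polynomial import polynomial as P

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Degree-7 Chebyshev approximation of the sigmoid on [-1, 1], then rewritten as an
# ordinary polynomial sum_l alpha_l X^l, mirroring the transformation described above.
A = C.chebinterpolate(sigmoid, 7)   # Chebyshev-series coefficients A_0..A_7
alpha = C.cheb2poly(A)              # monomial coefficients alpha_0..alpha_7

X = np.linspace(-1.0, 1.0, 5)
print(P.polyval(X, alpha))          # evaluate sum_l alpha_l X^l
print(sigmoid(X))                   # nearly identical values on [-1, 1]
```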

3.2. Perturbation of objective functions

We employ the functional mechanism (Zhang et al. 2012) to perturb the objective function $\hat{E}(\cdot)$ by injecting Laplace noise into its polynomial coefficients. The hidden layer contains K groups of hidden units. Each group is trained with a local region of input neurons, and these regions are not merged with each other in the learning process. Therefore, it is not necessary to aggregate the sensitivities of the training algorithm over the K groups into the sensitivity of the function $\hat{E}(\cdot)$. Instead, the sensitivity of the function $\hat{E}(\cdot)$ can be taken as the maximal sensitivity given any single group. As a result, the sensitivity of the function $\hat{E}(\cdot)$ can be computed in the following lemma.

Lemma 2 Let D and D′ be any two neighboring datasets. Let $\hat{E}(D,W)$ and $\hat{E}(D',W)$ be the objective functions of regression analysis on D and D′, respectively, and let the $\alpha_l$ be the Chebyshev polynomial coefficients. The following inequality holds:

$\Delta \le 2 \max_{t,k} \Big\{ \sum_{i,j=1}^{N_H} \sum_{l=0}^{L} |\alpha_l| \Big[ \Big( \frac{\sum_{r,s=1}^{N_W} v_{ij,rs}^{t,k} + 1}{Z_{ij}^k} \Big)^{l} + \sum_{r,s=1}^{N_W} \Big( \frac{\sum_{r,s=1}^{N_W} v_{ij,rs}^{t,k} + 1}{Z_{ij}^k} \Big)^{l} \big| v_{ij,rs}^{t,k} \big| \Big] + \sum_{i,j=1}^{N_V} \big| v_{ij}^{t} \big| \Big\}$ (15)

Proof By replacing $W_{rs}^{k}$ (i.e., ∀r, s), $b_k$, and c in $\hat{E}(D,W)$ with 1, we obtain the function containing only the polynomial coefficients of $\hat{E}(D,W)$, denoted $\lambda_{\phi}^{D}$. We have that

$\lambda_{\phi}^{D} = \sum_{t \in D} \lambda_{\phi}^{t}$

where

$\lambda_{\phi}^{t} = -\sum_{k=1}^{K} \sum_{i,j=1}^{N_H} \sum_{r,s=1}^{N_W} \Big( \sum_{l=0}^{L} \alpha_l \Big( \frac{\sum_{r,s=1}^{N_W} v_{ij,rs}^{t} + 1}{Z_{ij}^k} \Big)^{l} \Big) v_{ij,rs}^{t} - \sum_{k=1}^{K} b_k \sum_{i,j=1}^{N_H} \sum_{l=0}^{L} \alpha_l \Big( \frac{\sum_{r,s=1}^{N_W} v_{ij,rs}^{t} + 1}{Z_{ij}^k} \Big)^{l} - \sum_{i,j=1}^{N_V} v_{ij}^{t}$

The sensitivity of $\hat{E}(\cdot)$ can be computed as follows:

$\Delta = \Big\| \sum_{t_i \in D} \lambda_{\phi}^{t_i} - \sum_{t_i' \in D'} \lambda_{\phi}^{t_i'} \Big\|_1 \le 2 \max_{t} \big\| \lambda_{\phi}^{t} \big\|_1 \le 2 \max_{t} \Big\{ \sum_{k=1}^{K} \sum_{i,j=1}^{N_H} \sum_{l=0}^{L} |\alpha_l| \Big[ \Big( \frac{\sum_{r,s=1}^{N_W} v_{ij,rs}^{t,k} + 1}{Z_{ij}^k} \Big)^{l} + \sum_{r,s=1}^{N_W} \Big( \frac{\sum_{r,s=1}^{N_W} v_{ij,rs}^{t,k} + 1}{Z_{ij}^k} \Big)^{l} \big| v_{ij,rs}^{t,k} \big| \Big] + \sum_{i,j=1}^{N_V} \big| v_{ij}^{t} \big| \Big\}$ (16)

The current sensitivity is an aggregation of sensitivities from all K groups of hidden units. However, each of them is trained with a local region of input neurons, which will not be merged with the others in the learning process. Therefore, the sensitivity of the function E^() can be considered the maximal sensitivity given any single group of hidden units in a hidden layer. From Eq. (16), the final sensitivity of the function E^() is as follows:

$\Delta \le 2 \max_{t,k} \Big\{ \sum_{i,j=1}^{N_H} \sum_{l=0}^{L} |\alpha_l| \Big[ \Big( \frac{\sum_{r,s=1}^{N_W} v_{ij,rs}^{t,k} + 1}{Z_{ij}^k} \Big)^{l} + \sum_{r,s=1}^{N_W} \Big( \frac{\sum_{r,s=1}^{N_W} v_{ij,rs}^{t,k} + 1}{Z_{ij}^k} \Big)^{l} \big| v_{ij,rs}^{t,k} \big| \Big] + \sum_{i,j=1}^{N_V} \big| v_{ij}^{t} \big| \Big\}$

Consequently, Eq. (15) holds.

We use gradient descent to train the perturbed model E¯(). That results in private hidden layers. To construct a private convolutional deep belief network (pCDBN), we stack multiple private hidden layers and max-pooling layers on top of each other. The pooling layers only play the roles of signal filters of the private hidden layers. Therefore, there is no need to enforce privacy in max-pooling layers.

3.3. Perturbation of softmax layer

On top of the pCDBN, we add an output layer, which includes a single binomial variable to predict Y. The output variable $\hat{y}$ is fully linked to the hidden variables of the highest hidden (pooling) layer, denoted $p^{(o)}$, by weighted connections $W^{(o)}$, where o is the number of hidden (pooling) layers in the CDBN. We use the logistic function as the activation function of $\hat{y}$, i.e., $\hat{y} = \sigma(W^{(o)} p^{(o)})$. Cross-entropy error, which is a typical objective function in deep learning (Bengio 2009), is used as the loss function. The cross-entropy error function has been widely used and applied in real-world applications (Bengio 2009). Therefore, it is critical to preserve differential privacy under the use of the cross-entropy error function. However, other loss functions, e.g., squared errors, can be applied in the softmax layer as well. Let $Y_T$ be a set of labeled data points used to train the model; the cross-entropy error function is given by:

$C(Y_T, \theta) = \sum_{i=1}^{|Y_T|} \Big( y_i \log\big(1 + e^{-W^{(o)} p_i^{(o)}}\big) + (1 - y_i) \log\big(1 + e^{W^{(o)} p_i^{(o)}}\big) \Big)$ (17)

By applying the technique in Phan et al. (2016c), based on Taylor Expansion (Arfken 1985), we can derive the polynomial approximation of the cross-entropy error function as follows:

$\hat{C}(Y_T, \theta) = \sum_{i=1}^{|Y_T|} \sum_{l=1}^{2} \sum_{R=0}^{2} \frac{f_l^{(R)}(0)}{R!} \big( W^{(o)} p_i^{(o)} \big)^{R}$ (18)

where

$g_1(t_i, W^{(o)}) = W^{(o)} p_i^{(o)}, \quad g_2(t_i, W^{(o)}) = W^{(o)} p_i^{(o)}$
$f_1(z) = y_i \log(1 + e^{-z}), \quad f_2(z) = (1 - y_i) \log(1 + e^{z})$

To preserve differential privacy, the softmax layer is perturbed by using the functional mechanism (Phan et al. 2016c; Zhang et al. 2012). The sensitivity of the softmax layer, $\Delta_C$, is estimated as $\Delta_C = |p^{(o)}| + \frac{1}{4} |p^{(o)}|^2$ (Phan et al. 2016c).
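As a minimal illustration of this step, the following sketch injects Laplace noise at scale $\Delta_C/\epsilon$ into a vector of softmax-layer coefficients; the function and argument names are ours.

```python
import numpy as np

def perturb_softmax_coefficients(coeffs, n_pooling_units, epsilon, rng=None):
    """Perturb polynomial coefficients of the softmax objective (Eq. 18) with
    Laplace noise scaled by Delta_C / epsilon, where Delta_C = |p| + |p|^2 / 4."""
    rng = rng or np.random.default_rng()
    delta_c = n_pooling_units + 0.25 * n_pooling_units ** 2
    noise = rng.laplace(0.0, delta_c / epsilon, size=np.shape(coeffs))
    return np.asarray(coeffs, dtype=float) + noise

noisy_coeffs = perturb_softmax_coefficients(np.zeros(10), n_pooling_units=1024, epsilon=1.0)
```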

3.4. Approximation error bounds

The following lemma shows how much error our approximation approach incurs and that the average error of the approximation is always bounded:

Lemma 3 (Approximation Error Bounds) Let $S_L(E) = \| E(D,W) - \hat{E}(D,W) \|$ and $U_L(E) = \| E(D,W) - E^*(D,W) \|$, where E(D, W) is the target function, $\hat{E}(D,W)$ is the approximation function learned by our model, and $E^*(D,W)$ is the best uniform approximation. The lower and upper bounds of the sum square error are as follows:

$\Big( 4 + \frac{4}{\pi^2} \log L \Big) N_H^2 K \times U_L(E) > S_L(E) \ge U_L(E) \ge \frac{\pi}{4} N_H^2 K \, |A_{L+1}|$ (19)

Proof Following the well-known results in Rivlin (1990), given a target sigmoid function σ, a polynomial approximation function $\hat{\sigma}$ learned by the model, and the best uniform approximation $\sigma^*$ of σ, with $S_L(\sigma) = \|\sigma - \hat{\sigma}\|$ and $U_L(\sigma) = \|\sigma - \sigma^*\|$, we have that:

$S_L(\sigma) \ge U_L(\sigma) \ge \frac{\pi}{4} |A_{L+1}|$ (20)

Since there are $N_H^2 K$ hidden units in our pCDBN model, we have $S_L(E) \ge U_L(E) \ge \frac{\pi}{4} N_H^2 K |A_{L+1}|$. Similarly, in Rivlin (1990), we also have

$U_L(\sigma) \le S_L(\sigma) < \Big( 4 + \frac{4}{\pi^2} \log L \Big) U_L(\sigma)$ (21)

Since there are $N_H^2 K$ hidden units in our pCDBN model, we have

$\Big( 4 + \frac{4}{\pi^2} \log L \Big) N_H^2 K \times U_L(E) > S_L(E) \ge U_L(E)$ (22)

Therefore, Eq. (19) holds.

The approximation error depends on the structure of the energy function E(D, W), i.e., the number of hidden neurons $N_H^2 K$ and $|A_{L+1}|$, and the number of attributes of the dataset. Lemma 3 can be used to determine when to stop learning the approximation model. For each group of $N_H^2$ hidden units, the bound on the sum square error is only $\frac{\pi}{4} N_H^2 |A_{L+1}|$, and $|A_{L+1}|$ is tiny when L is large enough.

Importantly, Lemmas 2 and 3 show that the sensitivity Δ and the approximation error bounds of the energy-based function are entirely independent of the number of data instances. This sufficiently guarantees that our differential privacy preserving framework can be applied in large datasets without consuming excessive privacy budgets. This is a substantial result when working with complex models, such as deep neural networks on large-scale datasets. It is worth noting that non-linear activation functions, which are continuously differentiable [Stone-Weierstrass Theorem (Rudin 1976)] and satisfy the Riemann-integrable condition, can be approximated by using Chebyshev Expansion. Therefore, our framework can be applied given such activation functions as, e.g., tanh, arctan, sigmoid, softsign, sinusoid, sinc, Gaussian, etc. (Wikipedia 2016). In the experiment section, we will show that our approach leads to accurate results.

Note that the proofs of Lemmas 2 and 3 do not depend on the assumption that the data features are non-negative and that the target follows a binomial distribution. The proofs are generally applicable to inputs and targets that are not restricted by any constraint. As shown in the next section, our approach efficiently works with a multi-class classification task on the MNIST dataset (Lecun et al. 1998). The cross-entropy error function is applied in the softmax layer.

4. Experiments

To validate our approach, we have conducted extensive experiments on well-known and large-scale datasets, including a health social network, YesiWell data (Phan et al. 2016c), and a handwriting digit dataset, MNIST (Lecun et al. 1998). Our validation focuses on four key issues: (1) The effectiveness and robustness of our pCDBN model; (2) The effects of our model and hyper-parameter selections, including the use of Chebyshev polynomials, the impact of the polynomial degree L, and the effect of the probabilities $P(h_{ij}^k = 1 \mid v)$ in approximating the energy function; (3) The ability of our model to work on large-scale datasets; and (4) The benefits of being independent of the number of training epochs in consuming the privacy budget.

We carry out the validation through three approaches. One is by conducting human behavior prediction with various settings of data cardinality, privacy budget ϵ, noisy versus noiseless models, and original versus approximated models. By this we rigorously examine the effectiveness of our model compared with the state-of-the-art algorithms, i.e., Phan et al. (2016c) and Abadi et al. (2016). The second approach is to discover gold standards in our model configuration by examining various settings of hyper-parameters. The third approach is to assess the benefits of being independent of the number of training epochs in terms of consuming the privacy budget of our pCDBN model. In fact, we present the prediction accuracies of our pCDBN and existing algorithms as a function of the number of training epochs.

4.1. Human behavior modeling

In this experiment, we have developed a private convolutional deep belief network (pCDBN) for human behavior prediction and classification tasks in the YesiWell health social network (Phan et al. 2016c).

Health social network data To be able to compare our model with the state-of-the-art deep private auto-encoders for human behavior prediction (dPAH), we use the same dataset used in Phan et al. (2016c). Data were collected from Oct 2010 to Aug 2011 as a collaboration between PeaceHealth Laboratories, SK Telecom Americas, and the University of Oregon to record daily physical activities, social activities (text messages, competitions, etc.), biomarkers, and biometric measures (cholesterol, BMI, etc.) for a group of 254 overweight and obese individuals. Physical activities, including information about the number of walking and running steps, were reported via a mobile device carried by each user. All users enrolled in an online social network, allowing them to friend and communicate with each other. Users’ biomarkers and biometric measures were recorded via daily/weekly/monthly medical tests performed at home individually or at our laboratories.

In total, we consider three groups of attributes:

  • Behaviors: #competitions joined, #exercising days, #goals set, #goals achieved, ∑(distances), avg(speeds);

  • #Inbox Messages: Encouragement, Fitness, Followup, Competition, Games, Personal, Study protocol, Progress report, Technique, Social network, Meetups, Goal, Wellness meter, Feedback, Heckling, Explanation, Invitation, Notice, Technical fitness, Physical;

  • Biomarkers and Biometric Measures: Wellness Score, BMI, BMI slope, Wellness Score slope.

pCDBN for human behavior modeling Our starting observation is that human behavior is the outcome of behavior determinants such as self-motivation, social influences, and environmental events. This observation is rooted in human agency in social cognitive theory (Bandura 1989). In our model, these human behavior determinants are combined together to model human behaviors. Given a tuple $t_i$, $x_{i1}, \dots, x_{id}$ are the personal attributes and $y_i$ is a binomial parameter that indicates whether a user increases or decreases his/her exercises. To describe the pCDBN model, we adjust the notations $x_i$ and $y_i$ a little to incorporate the temporal dimension and our social network information. Specifically, $x^{ut} = \{x_1^{ut}, \dots, x_d^{ut}\}$ is used to denote the d attributes of user u at time point t. Meanwhile, $y^{ut}$ is used to denote the status of the user u at time point t: $y^{ut} = 1$ denotes that u increases exercises at time t; otherwise $y^{ut} = 0$.

In fact, the current behavior at time-stamp t of a user u is conditional on his/her behavior in the past N time-stamps, i.e., $t-N, \dots, t-1$. To model this effect [i.e., also considered as a form of self-motivation in social cognitive theory (Bandura 1989)], we first aggregate his/her personal attributes in the last N timestamps into a d × N matrix, which will be considered the visible input V. Then, to model self-motivation and social influence, we add an aggregation of his/her attributes and the effects from his/her friends at the current timestamp t into the dynamic biases, i.e., $\hat{b}_k^t$ and $\hat{c}^t$, of the hidden and visible units (Eqs. 23–26). The hidden and visible variables at time t are

$h_{i,j,t}^{k} = \sigma\big( (\tilde{W}^{k} * v_t)_{ij} + \hat{b}_k^{t} \big)$ (23)
$v_{i,j,t} = \sigma\big( (\textstyle\sum_k W^{k} * h^{k})_{ij}^{t} + \hat{c}^{t} \big)$ (24)

where

$\hat{b}_k^{t} = b_k + \sum_{e=1}^{d} B_e^{k} x_e^{ut} + \frac{\eta_k}{|F_u|} \sum_{v \in F_u} \psi_t(v, u)$ (25)
$\hat{c}^{t} = c + \sum_{e=1}^{d} A_e x_e^{ut} + \frac{\eta}{|F_u|} \sum_{v \in F_u} \psi_t(v, u)$ (26)

where $\hat{b}_k^t$ and $\hat{c}^t$ are dynamic biases, and $B_e^k$ is a matrix of weights which connects $x^{ut}$ with the hidden variables in group k. $\psi_t(v, u)$ is the probability that v influences u on physical activity at time t; it is derived from the TaCPP model (Phan et al. 2016b). $F_u$ is the set of friends of u in the social network. η and $\eta_k$ are parameters which represent the ability to observe the explicit social influences from neighboring users.

The model includes two hidden layers. We trained 10 first-layer bases, each 4 × 12 variables v, and 10 second-layer bases, each 2 × 6. The pooling ratio was 2 for both layers. In our work, the contrastive divergence algorithm (Hinton 2002) was used to optimize the energy function, and back-propagation was used to optimize the cross-entropy error function in the softmax layer. The implementations of our models using TensorFlow and Python were made publicly available on GitHub. The results and algorithms can be reproduced on either a single workstation or a Hadoop cluster. To examine the effectiveness of our pCDBN, we established two experiments, i.e., prediction and classification, as follows.

4.1.1. Human behavior prediction

Experimental setting Our pCDBN model is used to predict the statuses of all the users in the next time point t + 1 given M past time points $t - M + 1, \dots, t$. The model has been trained on daily and weekly datasets. Both datasets contain 300 time points, 30 attributes, 254 users, 2766 messages, 1383 friend connections, 11,458 competitions, etc. For each dataset, we have, in total, 254 users × 300 timestamps = 76,200 data points.

The number of previous time intervals N is set to 4. N is used as a time window to generate training samples. For instance, given 10 days of data (M = 10), a time window of 4 days (N = 4), and d data features, e.g., BMI, #steps, etc., a single input V will be a d × N (= d × 4) matrix. A single input V is considered as a data sample to model human behavior in our prediction model. If we slide the window N over the M = 10 days of data, we will have M − N + 1 training samples for each individual, i.e., 10 − 4 + 1 = 7 in this example. So, we have, in total, 254 × (M − N + 1) = 254 × 7 = 1,778 training samples for every 10 days of data to predict whether an individual will increase physical activity in the next day t + 1.
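The sliding-window construction described above can be written compactly as follows; this is a sketch with synthetic data, and the array names are ours.

```python
import numpy as np

def make_windows(user_series, N=4):
    """Turn one user's (d, M) attribute series into M - N + 1 samples of shape (d, N)."""
    d, M = user_series.shape
    return np.stack([user_series[:, i:i + N] for i in range(M - N + 1)])

# 254 users, d = 30 attributes, M = 10 days -> 254 * (10 - 4 + 1) = 1778 training samples
rng = np.random.default_rng(0)
data = rng.random((254, 30, 10))                       # synthetic stand-in for real attributes
samples = np.concatenate([make_windows(u, N=4) for u in data])
print(samples.shape)                                   # (1778, 30, 4)
```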

The Chebyshev polynomial approximation degree L and the learning rate are set to 7 and $10^{-3}$, respectively. To avoid over-fitting, we apply L1-regularization and the dropout technique (Srivastava et al. 2014), i.e., the dropout probability is set to 0.5. Regarding K-fold cross-validation or bootstrapping, it is either unnecessary or impractical to apply them in deep learning, and particularly in our study (Bengio 2017; Reed et al. 2014). This is because: (1) It is too expensive and time consuming to train K deep neural networks, each of which usually has a large number of parameters, e.g., hundreds of thousands of parameters (Bengio 2017); and (2) Bootstrapping is only used to train neural networks when class labels may be missing, objects in the image may not be localized, and, in general, the labeling may be subjective, noisy, and incomplete (Reed et al. 2014). This is out of the scope of our focus. Our models were trained on a graphics card (NVIDIA GTX TITAN X, 12 GB RAM with 3072 CUDA cores).

Competitive models We compare our pCDBN with two types of state-of-the-art models, as follows:

  • (a)

    Deep learning models for human behavior prediction, including: (1) The original convolutional deep belief network (CDBN) for human behavior prediction without enforcing differential privacy; (2) The truncated version of the CDBN, in which the energy function is approximated without injecting noise to preserve differential privacy, denoted TCDBN; and (3) The conditional Restricted Boltzmann Machine, denoted SctRBM (Li et al. 2014). None of these models enforces ϵ-differential privacy.

  • (b)

    Deep Private Auto-Encoder (dPAH) (Phan et al. 2016c), which is the state-of-the-art deep learning model under differential privacy for human behavior prediction. The dPAH model outperforms general methods for regression analysis under ϵ-differential privacy, i.e., functional mechanism (Zhang et al. 2012), DPME (Lei 2011), and filter-priority (Cormode 2011). Therefore, we only compare our model with the dPAH.

Accuracy versus dataset cardinality Fig. 2 shows the prediction accuracy of each algorithm as a function of the dataset cardinality. We vary the size of M, which also can be considered as the sampling rate of the dataset. ϵ is 1.0 in this experiment. In both datasets, there is a gap between the prediction accuracy of pCDBN and the original convolutional deep belief network (CDBN). However, the gap gets dramatically smaller as the dataset cardinality (M) increases. In addition, our pCDBN outperforms the state-of-the-art dPAH in most of the cases, and the results are statistically significant (p = 5.3828e−05, performed by paired t test). It is also significantly better than the SctRBM when the sampling rate goes just a bit higher, i.e., > 0.2 or > 0.3 (p = 8.8350e−04, performed by paired t test). Either 0.2 or 0.3 is a small sampling rate; thus, this is a remarkable result.

Fig. 2 Prediction accuracy versus dataset cardinality

Accuracy versus privacy budget Fig. 3 illustrates the prediction accuracy of each model as a function of the privacy budget ϵ. M is set to 12 ≈ 0.32%. The prediction accuracies of privacy non-enforcing models remain unchanged for all ϵ. Since a smaller ϵ requires a larger amount of noise to be injected, privacy enforcing models produce less accurate predictions when ϵ decreases. pCDBN outperforms dPAH in all cases, and the results are statistically significant (p = 2.7266e−12, performed by paired t test). In addition, it is relatively robust against the change of ϵ. In fact, the pCDBN model is competitive even with privacy non-enforcing models, i.e., SctRBM.

Fig. 3 Prediction accuracy versus privacy budget ϵ

Probabilities $P(h_{ij}^k = 1 \mid v)$ and Gibbs sampling To approximate the energy function E(D, W), we propose to use the probabilities $P(h_{ij}^k = 1 \mid v)$ instead of the values of $h_{ij}^k$, which are estimated by applying Gibbs sampling on the $P(h_{ij}^k = 1 \mid v)$. To illustrate the effect of our approach, we conducted both theoretical analysis and experimental evaluations as follows. Let us use $h_{ij}^k$ to estimate the sensitivity Δ of the energy function E(D, W) (Eq. 7) by following Lemma 1:

$\Delta = 2 \max_{t,k} \Big\{ \sum_{i,j=1}^{N_H} \sum_{r,s=1}^{N_W} \big| h_{ij}^{k,t} v_{i+r-1,j+s-1}^{t} \big| + \sum_{i,j=1}^{N_H} \big| h_{ij}^{k,t} \big| + \sum_{i,j=1}^{N_V} \big| v_{ij}^{t} \big| \Big\}$ (27)

There are several issues in Eq. (27) that prevent us from applying it. First, $h_{ij}^k$ cannot be considered an observed variable, since its value can only be estimated by applying Gibbs sampling given observed variables v and parameters W. In other words, the value of Δ is significantly dependent on Gibbs sampling given $P(h_{ij}^k = 1 \mid v)$. Therefore, Δ can be uncertain in every sampling step. That may lead to a violation of the guarantee of privacy protection under a differential privacy mechanism. To address this issue, one may set all the hidden variables $h_{ij}^k$ to 1. That leads to the use of a maximal value of the sensitivity Δ as follows:

$\Delta = 2 \max_{t,k} \Big\{ \sum_{i,j=1}^{N_H} \sum_{r,s=1}^{N_W} \big| v_{i+r-1,j+s-1}^{t} \big| + N_H^2 + \sum_{i,j=1}^{N_V} \big| v_{ij}^{t} \big| \Big\}$ (28)

The maximal value of Δ (Eq. 28) is huge and is not an optimal bound. In other words, the model efficiency will be affected, since too much noise will be unnecessarily injected into the model.

To tackle this challenge, our solution is to consider the probabilities $P(h_{ij}^k = 1 \mid v)$ instead of $h_{ij}^k$. As a result, the sensitivity Δ in Lemma 2 is only dependent on observed variables v instead of Gibbs samplings. That leads to a smaller amount of noise injected into the model. To demonstrate the effect of this approach, our model is compared with its truncated version, in which the energy function is approximated without injecting noise to preserve differential privacy, denoted TCDBN. Experimental results illustrated in Figs. 2 and 3 show that the impact of our approach on the original model CDBN is marginal in terms of prediction accuracy. On average, the prediction accuracy is only less than 1% lower compared with the original model. This is a practical result.

4.1.2. Human behavior classification

In this experiment, we aim to examine: (1) The robustness of our approach when it is trained with a large number of epochs at different noise levels; and (2) The effectiveness of different approximation approaches, including Chebyshev, Taylor, and Piecewise approximations. Our experiment setting is as follows:

We consider every pair (u, t) as a data point. Given that t is a week, we have, in total, 9652 data points (254 users × 38 weeks). We randomly select 10% of the data points as a testing set, and the remaining data points are used as a training set. At each training step, the model is trained with 111 randomly selected data points, i.e., batch size = 111. To avoid imbalance in the data, each training batch consists of a balanced number of data samples from different data classes. With this technique, data points in the under-represented class can be incidentally sampled more than the others (Brownlee 2015). The model is used to classify the statuses of all the users given their features. In this experiment, we compare our model with state-of-the-art polynomial approximation approaches in digital implementations, including the truncated Taylor Expansion, $\sigma(x) = \tanh x \approx x - \frac{x^3}{3} + \frac{2x^5}{15}$ (Lee and Jeng 1998; Vlcek 2012) (pCDBN_TE), and the linear piecewise approximation, $\sigma(x) \approx c_1 x + c_2$ (Armato et al. 2009) (pCDBN_PW). Other baseline models, i.e., dPAH and SctRBM, cannot be directly applied to this task; so, we do not include them in this experiment.

Figure 4 shows classification accuracies for different levels of privacy budget ϵ. Each plot illustrates the evolution of the testing accuracy of each algorithm and its power fit curve as a function of the number of epochs. After 600 epochs, our pCDBN can achieve 88% with ϵ = 0.1, 92% with ϵ = 2, and 94% with ϵ = 8. In addition, our model outperforms baseline approaches, i.e., pCDBN_TE and pCDBN_PW, and the results are statistically significant (p = 4.4293e−07, performed by paired t test). One of the important observations we acquire from this result is that: The Chebyshev polynomial approximation is more effective than the competitive approaches in preserving differential privacy in convolutional deep belief networks. One of the reasons is that Chebyshev polynomial approximation incurs fewer errors than the other two approaches (Harper 2012; Vlcek 2012). Similar to Layer-wise Relevance Propagation (Bach et al. 2015), the approximation errors will propagate across neural layers. Therefore, the smaller the error, the more accurate the models will be.

Fig. 4 Results of classification accuracy for different noise levels, different approximation approaches, and the number of training epochs

Note that our observations (i.e., data points) in the YesiWell data are not strictly independent. Therefore, the simple use of paired t test may not give rigorous conclusions. However, the very small p values under the paired t test can still indicate the significant improvement of our approach over baselines.

4.2. Handwriting digit recognition

To further demonstrate the ability to work on large-scale datasets, we conducted additional experiments on the well-known MNIST dataset (Lecun et al. 1998). The MNIST database of handwritten digits consists of 60,000 training examples, and a test set of 10,000 examples (Lecun et al. 1998). Each example is a 28 × 28 size gray-level image. The MNIST dataset is completely balanced, with 6000 images for each category, with 10 categories in total.

We compare our model with the private stochastic gradient descent algorithm, denoted pSGD, from Abadi et al. (2016). The pSGD is the state-of-the-art algorithm in preserving differential privacy in deep learning. pSGD is an advanced version of Shokri and Shmatikov (2015); therefore, there is no need to include the work proposed by Shokri and Shmatikov (2015) in our experiments. The two approaches, i.e., our proposed algorithm and the pSGD, are built on the same structure of a convolutional deep belief network. As in prior work (Abadi et al. 2016), we apply two convolution layers, one with 32 features and one with 64 features, in which each hidden neuron connects with a 5 × 5 unit patch. On top of the convolution layers, there is a fully connected layer with 1024 units, and a softmax of 10 classes (corresponding to the 10 digits) with cross-entropy loss.
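For reference, the following Keras sketch mirrors only the layer sizes just described (32 and 64 feature maps with 5 × 5 patches, a 1024-unit fully connected layer, and a 10-way softmax); it is a plain, non-private stand-in and omits the privacy machinery and the CDBN pre-training.

```python
import tensorflow as tf

def build_mnist_cnn():
    """Non-private architecture sketch; hyper-parameters below are illustrative."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(28, 28, 1)),
        tf.keras.layers.Conv2D(32, kernel_size=5, padding='same', activation='sigmoid'),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Conv2D(64, kernel_size=5, padding='same', activation='sigmoid'),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1024, activation='sigmoid'),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])

model = build_mnist_cnn()
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
```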

Figure 5a illustrates the prediction accuracies of each algorithm as a function of the privacy budget ϵ. We can see that our model pCDBN outperforms the pSGD in terms of prediction accuracies with small values of the privacy budget ϵ, i.e., ϵ ≤ 1.0. This is a substantial result, since smaller values of ϵ enforce a stronger privacy guarantee of the model. With higher values of ϵ (>1.0), i.e., small injected noise, the two models converge to similar prediction accuracies.

Fig. 5 Accuracy for different noise levels on the MNIST dataset

Figure 5b demonstrates the benefit of being independent of the number of training epochs in consuming the privacy budget of our mechanism. In this experiment, ϵ is set to 0.5, i.e., large injected noise. The pSGD achieves higher prediction accuracies after using a small number of training epochs, i.e., 88.75% after 18 epochs, compared with the pCDBN. More epochs cannot be used to train the pSGD, since it will violate the privacy protection guarantee. Meanwhile, our model, the pCDBN, can be trained with an unlimited number of epochs. After a certain number of training epochs, i.e., 162 epochs, the pCDBN outperforms the pSGD in terms of prediction accuracy, with 91.71% compared with 88.75%.

Our experimental results clearly show the ability of our mechanism to work with large-scale datasets. In addition, being independent of the number of training epochs in consuming the privacy budget ϵ is significant. Our mechanism is the first of its kind offering this distinctive ability.

The impact of polynomial degree L Figure 6 shows the prediction accuracies of our model by using different values of L on the MNIST dataset (Lecun et al. 1998). After a certain number of training epochs, it is clear that the impact of L is not significant when L is larger than or equal to 3. In fact, the models with L ≥ 3 converge to similar prediction accuracies after 162 training epochs. The difference is notable with small numbers of training epochs. With L larger than 7, the prediction accuracies are very much the same. Therefore we did not show them in Fig. 6. Our observation can be used as a gold standard in selecting L when approximating energy functions based on Chebyshev polynomials.

Fig. 6 The impact of different values of L on the MNIST dataset

Computational performance Given the MNIST dataset, it takes an average of 761 seconds to train our model, after 162 epochs, by using a GPU (NVIDIA GTX TITAN X, 12 GB RAM with 3072 CUDA cores). Meanwhile, training the pSGD is faster than our model, since only a small number of training epochs is needed to train the pSGD. On average, training the pSGD takes 86 s, after 18 training epochs. For the YesiWell dataset, training our pCDBN model takes an average of 2910 s, after 600 epochs, compared with 2141 s of the dPAH model.

5. Conclusions and discussions

In this paper, we propose a novel framework for developing convolutional deep belief networks under differential privacy. Our approach conducts both sensitivity analysis and noise insertion on the energy-based objective functions. Distinctive characteristics offered by our model include: (1) It is totally independent of the number of training epochs in consuming privacy budget; (2) It has the potential to be applied in typical energy-based deep neural networks; (3) Non-linear activation functions, which are continuously differentiable [Stone-Weierstrass Theorem (Rudin 1976)] and satisfy the Riemann-integrable condition, e.g., tanh, arctan, sigmoid, softsign, sinusoid, sinc, Gaussian, etc. (Wikipedia 2016), can be applied; and (4) It has the ability to work with large-scale datasets. With these fundamental abilities, our framework could significantly improve the applicability of differential privacy preservation in deep learning. To illustrate the effectiveness of our framework, we propose a novel model based on our private convolutional deep belief network (pCDBN), for human behavior modeling. Experimental evaluations on a health social network, YesiWell data, and a handwriting digit dataset, MNIST data, validate our theoretical results and the effectiveness of our approach.

In future work, it is worthwhile to study how private information might be extracted from deep neural networks. We will also examine potential approaches to preserving differential privacy in more complex deep learning models, such as Long Short-Term Memory (LSTM) networks (Hochreiter and Schmidhuber 1997). Another open direction is adapting our framework to multiparty computation settings, in which multiple parties jointly train a deep learning model under differential privacy. Innovative multiparty protocols for deep learning under differential privacy must also be able to work with large-scale datasets.

In principle, our mechanism can be applied to rectified linear units (ReLUs) (Glorot et al. 2011) and parametric rectified linear units (PReLUs) (He et al. 2015). The main difference is that the energy function no longer needs to be approximated, because it is already a polynomial function when ReLU units are used. However, a local response normalization (LRN) layer (Krizhevsky et al. 2012) must be added to bound the values of the hidden neurons, a common step when working with ReLU units. Implementing this layer and ReLU units under differential privacy is an exciting opportunity for future work.
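
To indicate what such a bounding layer looks like, here is a minimal NumPy sketch of local response normalization across feature maps, following the formulation of Krizhevsky et al. (2012); the hyperparameters k, n, alpha, and beta are the commonly cited defaults, not values tuned for our setting.

    import numpy as np

    def local_response_normalization(a, k=2.0, n=5, alpha=1e-4, beta=0.75):
        """Normalize activations a of shape (channels, height, width) across channels."""
        channels = a.shape[0]
        out = np.empty_like(a)
        for c in range(channels):
            lo, hi = max(0, c - n // 2), min(channels, c + n // 2 + 1)
            denom = (k + alpha * np.sum(a[lo:hi] ** 2, axis=0)) ** beta
            out[c] = a[c] / denom
        return out

    # Toy example: bound ReLU activations before they feed the next layer.
    relu_out = np.maximum(0.0, np.random.randn(8, 28, 28))
    bounded = local_response_normalization(relu_out)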

Another challenging problem is identifying the exact risk of re-identification or reconstruction of the data under differential privacy. Lee and Clifton (2012) proposed differential identifiability to link individual identifiability to ϵ-differential privacy; however, this remains a non-trivial question. One possible approach is to design methods that reconstruct the original models from noisy deep neural networks, and then use the reconstructed models to infer sensitive information in the training data. How to reconstruct the original models from differentially private deep neural networks is itself an open question; it is very challenging and will require a significant effort from the whole community to answer.

Acknowledgements

This work is supported by NIH Grant R01GM103309 to the SMASH project. Wu is also supported by NSF Grants 1502273 and 1523115. Dou is also supported by NSF Grant 1118050. We thank Xiao Xiao and Rebeca Sacks for their contributions.

References

  1. Abadi M, Chu A, Goodfellow I, McMahan HB, Mironov I, Talwar K, & Zhang L (2016). Deep learning with differential privacy. arXiv:1607.00133.
  2. Arfken G (1985). Mathematical methods for physicists (3rd ed.). Cambridge: Academic Press.
  3. Armato A, Fanucci L, Pioggia G, & Rossi DD (2009). Low-error approximation of artificial neuron sigmoid function and its derivative. Electronics Letters, 45(21), 1082–1084.
  4. Bach S, Binder A, Montavon G, Klauschen F, Müller KR, & Samek W (2015). On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE, 10(7), e0130140.
  5. Bandura A (1989). Human agency in social cognitive theory. The American Psychologist, 44(9), 1175.
  6. Bengio Y (2009). Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1), 1–127. doi: 10.1561/2200000006.
  7. Bengio Y (2017). Is cross-validation heavily used in deep learning or is it too expensive to be used? Quora. https://www.quora.com/Is-cross-validation-heavily-used-in-Deep-Learning-or-is-it-too-expensive-to-be-used.
  8. Bengio Y, Lamblin P, Popovici D, Larochelle H, Montreal UD, & Quebec M (2007). Greedy layer-wise training of deep networks. In NIPS.
  9. Brownlee J (2015). 8 tactics to combat imbalanced classes in your machine learning dataset. http://machinelearningmastery.com/tactics-to-combat-imbalanced-classes-in-your-machine-learning-dataset/.
  10. Chan THH, Li M, Shi E, & Xu W (2012). Differentially private continual monitoring of heavy hitters from distributed streams. In PETS'12 (pp. 140–159).
  11. Chaudhuri K, & Monteleoni C (2008a). Privacy-preserving logistic regression. In NIPS (pp. 289–296).
  12. Chaudhuri K, & Monteleoni C (2008b). Privacy-preserving logistic regression. In NIPS'08 (pp. 289–296).
  13. Cheng Y, Wang F, Zhang P, & Hu J (2016). Risk prediction with electronic health records: A deep learning approach. In SDM'16.
  14. Choi E, Schuetz A, Stewart WF, & Sun J (2016). Using recurrent neural network models for early detection of heart failure onset. Journal of the American Medical Informatics Association. doi: 10.1093/jamia/ocw112.
  15. Cormode G (2011). Personal privacy vs population privacy: Learning to attack anonymization. In KDD'11 (pp. 1253–1261).
  16. Dowlin N, Gilad-Bachrach R, Laine K, Lauter K, Naehrig M, & Wernsing J (2016). Cryptonets: Applying neural networks to encrypted data with high throughput and accuracy. In Proceedings of the 33rd international conference on machine learning, PMLR, Proceedings of Machine Learning Research (Vol. 48, pp. 201–210).
  17. Dwork C, & Lei J (2009). Differential privacy and robust statistics. In STOC'09 (pp. 371–380).
  18. Dwork C, McSherry F, Nissim K, & Smith A (2006). Calibrating noise to sensitivity in private data analysis. Theory of Cryptography, 3876, 265–284.
  19. Erlingsson U, Pihur V, & Korolova A (2014). Rappor: Randomized aggregatable privacy-preserving ordinal response. In CCS'14 (pp. 1054–1067).
  20. Fang R, Pouyanfar S, Yang Y, Chen SC, & Iyengar SS (2016). Computational health informatics in the big data age: A survey. ACM Computing Surveys, 49(1), 12:1–12:36. doi: 10.1145/2932707.
  21. Glorot X, Bordes A, & Bengio Y (2011). Deep sparse rectifier neural networks. In AISTATS (Vol. 15, p. 275).
  22. Gottlieb A, Stein GY, Ruppin E, Altman RB, & Sharan R (2013). A method for inferring medical diagnoses from patient similarities. BMC Medicine, 11(1), 194. doi: 10.1186/1741-7015-11-194.
  23. Harper T (2012). A comparative study of function approximators involving neural networks. Master of Science thesis, University of Otago. http://hdl.handle.net/10523/2397.
  24. Hay M, Rastogi V, Miklau G, & Suciu D (2010). Boosting the accuracy of differentially private histograms through consistency. Proceedings of the VLDB Endowment, 3(1), 1021–1032.
  25. He K, Zhang X, Ren S, & Sun J (2015). Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. CoRR abs/1502.01852. http://arxiv.org/abs/1502.01852.
  26. Helmstaedter M, Briggman KL, Turaga SC, Jain V, Seung HS, & Denk W (2013). Connectomic reconstruction of the inner plexiform layer in the mouse retina. Nature, 500(7461), 168–174.
  27. Hinton G (2002). Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8), 1771–1800.
  28. Hinton GE, Osindero S, & Teh YW (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18(7), 1527–1554. doi: 10.1162/neco.2006.18.7.1527.
  29. Hochreiter S, & Schmidhuber J (1997). Long short-term memory. Neural Computation, 9(8), 1735–1780. doi: 10.1162/neco.1997.9.8.1735.
  30. Jain P, Kothari P, & Thakurta A (2012). Differentially private online learning. In COLT'12 (pp. 24.1–24.34).
  31. Jamoom EW, Yang N, & Hing E (2016). Adoption of certified electronic health record systems and electronic information sharing in physician offices: United States, 2013 and 2014. NCHS Data Brief, 236, 1–8.
  32. Kifer D, & Machanavajjhala A (2011). No free lunch in data privacy. In SIGMOD'11 (pp. 193–204).
  33. Krizhevsky A, Sutskever I, & Hinton GE (2012). ImageNet classification with deep convolutional neural networks. In Advances in neural information processing systems (pp. 1097–1105).
  34. LeCun Y, Bengio Y, & Hinton G (2015). Deep learning. Nature, 521(7553), 436–444. doi: 10.1038/nature14539.
  35. Lecun Y, Bottou L, Bengio Y, & Haffner P (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278–2324. doi: 10.1109/5.726791.
  36. Lee J, & Clifton C (2012). Differential identifiability. In The 18th ACM SIGKDD international conference on knowledge discovery and data mining, KDD '12, Beijing, China, 12–16 August 2012 (pp. 1041–1049).
  37. Lee T, & Jeng J (1998). The Chebyshev-polynomials-based unified model neural networks for function approximation. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 28(6), 925–935.
  38. Lee H, Grosse R, Ranganath R, & Ng AY (2009). Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In ICML'09 (pp. 609–616).
  39. Lei J (2011). Differentially private m-estimators. In NIPS (pp. 361–369).
  40. Leung MKK, Xiong HY, Lee LJ, & Frey BJ (2014). Deep learning of the tissue-regulated splicing code. Bioinformatics, 30(12), i121–i129. doi: 10.1093/bioinformatics/btu277.
  41. Li H, Li X, Ramanathan M, & Zhang A (2015). Prediction and informative risk factor selection of bone diseases. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 12(1), 79–91. doi: 10.1109/TCBB.2014.2330579.
  42. Li X, Du N, Li H, Li K, Gao J, & Zhang A (2014). A deep learning approach to link prediction in dynamic networks. In SIAM'14 (pp. 289–297).
  43. Liu S, Liu S, Cai W, Pujol S, Kikinis R, & Feng D (2014). Early diagnosis of Alzheimer's disease with deep learning. In IEEE 11th international symposium on biomedical imaging, ISBI 2014, Beijing, China (pp. 1015–1018). doi: 10.1109/ISBI.2014.6868045.
  44. Ma J, Sheridan RP, Liaw A, Dahl GE, & Svetnik V (2015). Deep neural nets as a method for quantitative structure-activity relationships. Journal of Chemical Information and Modeling, 55(2), 263–274. doi: 10.1021/ci500747n.
  45. Mason J, & Handscomb D (2002). Chebyshev polynomials. Boca Raton: CRC Press. https://books.google.com/books?id=8FHf0P3to0UC.
  46. McSherry F, & Mironov I (2009). Differentially private recommender systems. In KDD'09, ACM.
  47. McSherry F, & Talwar K (2007a). Mechanism design via differential privacy. In 48th annual IEEE symposium on foundations of computer science (FOCS 2007), 20–23 October 2007, Providence, RI, USA, Proceedings (pp. 94–103).
  48. McSherry F, & Talwar K (2007b). Mechanism design via differential privacy. In FOCS '07 (pp. 94–103).
  49. Miotto R, Li L, Kidd BA, & Dudley JT (2016). Deep patient: An unsupervised representation to predict the future of patients from the electronic health records. Scientific Reports, 6, 26094. doi: 10.1038/srep26094.
  50. Nissim K, Raskhodnikova S, & Smith A (2007). Smooth sensitivity and sampling in private data analysis. In Proceedings of the thirty-ninth annual ACM symposium on theory of computing (pp. 75–84), ACM.
  51. Ortiz A, Munilla J, Górriz JM, & Ramírez J (2016). Ensembles of deep learning architectures for the early diagnosis of Alzheimer's disease. International Journal of Neural Systems, 26(07), 1650025. doi: 10.1142/S0129065716500258.
  52. Perotte A, Pivovarov R, Natarajan K, Weiskopf N, Wood F, & Elhadad N (2014). Diagnosis code assignment: Models and evaluation metrics. Journal of the American Medical Informatics Association, 21(2), 231–237. doi: 10.1136/amiajnl-2013-002159.
  53. Perotte A, Ranganath R, Hirsch JS, Blei D, & Elhadad N (2015). Risk prediction for chronic kidney disease progression using heterogeneous electronic health record data and time series analysis. Journal of the American Medical Informatics Association, 22(4), 872–880. doi: 10.1093/jamia/ocv024.
  54. Phan N, Dou D, Piniewski B, & Kil D (2015a). Social restricted Boltzmann machine: Human behavior prediction in health social networks. In ASONAM'15 (pp. 424–431).
  55. Phan N, Dou D, Wang H, Kil D, & Piniewski B (2015b). Ontology-based deep learning for human behavior prediction in health social networks. In Proceedings of the 6th ACM conference on bioinformatics, computational biology and health informatics (pp. 433–442). doi: 10.1145/2808719.2808764.
  56. Phan N, Dou D, Piniewski B, & Kil D (2016a). A deep learning approach for human behavior prediction with explanations in health social networks: Social restricted Boltzmann machine (SRBM+). Social Network Analysis and Mining, 6(1), 79:1–79:14. doi: 10.1007/s13278-016-0379-0.
  57. Phan N, Ebrahimi J, Kil D, Piniewski B, & Dou D (2016b). Topic-aware physical activity propagation in a health social network. IEEE Intelligent Systems, 31(1), 5–14.
  58. Phan N, Wang Y, Wu X, & Dou D (2016c). Differential privacy preservation for deep auto-encoders: An application of human behavior prediction. In AAAI'16 (pp. 1309–1316).
  59. Plis SM, Hjelm DR, Salakhutdinov R, Allen EA, Bockholt HJ, Long JD, et al. (2014). Deep learning for neuroimaging: A validation study. Frontiers in Neuroscience, 8, 229. doi: 10.3389/fnins.2014.00229.
  60. Reed SE, Lee H, Anguelov D, Szegedy C, Erhan D, & Rabinovich A (2014). Training deep neural networks on noisy labels with bootstrapping. CoRR abs/1412.6596.
  61. Rivlin TJ (1990). Chebyshev polynomials: From approximation theory to algebra and number theory (2nd ed.). New York: Wiley.
  62. Roumia M, & Steinhubl S (2014). Improving cardiovascular outcomes using electronic health records. Current Cardiology Reports, 16(2), 451. doi: 10.1007/s11886-013-0451-6.
  63. Rudin W (1976). Principles of mathematical analysis. New York: McGraw-Hill.
  64. Shokri R, & Shmatikov V (2015). Privacy-preserving deep learning. In CCS'15 (pp. 1310–1321).
  65. Smolensky P (1986). Information processing in dynamical systems: Foundations of harmony theory. In Parallel distributed processing: Explorations in the microstructure of cognition (Vol. 1, pp. 194–281).
  66. Song S, Chaudhuri K, & Sarwate AD (2013). Stochastic gradient descent with differentially private updates. In GlobalSIP (pp. 245–248).
  67. Srivastava N, Hinton G, Krizhevsky A, Sutskever I, & Salakhutdinov R (2014). Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15, 1929–1958. http://jmlr.org/papers/v15/srivastava14a.html.
  68. U.S. Department of Health and Human Services. (2016a). Health Information Technology for Economic and Clinical Health (HITECH) Act. https://www.hhs.gov/hipaa/for-professionals/special-topics/HITECH-act-enforcement-interim-final-rule/.
  69. U.S. Department of Health and Human Services. (2016b). Health Insurance Portability and Accountability Act of 1996. http://www.hhs.gov/hipaa/.
  70. Vlcek M (2012). Chebyshev polynomial approximation for activation sigmoid function. Neural Network World, 4, 387–393.
  71. Wang Y, Wu X, & Wu L (2013). Differential privacy preserving spectral graph analysis. In PAKDD (2) (pp. 329–340).
  72. Wikipedia. (2016). Activation function. https://en.wikipedia.org/wiki/Activation_function.
  73. Wu J, Roy J, & Stewart WF (2010). Prediction modeling using EHR data: Challenges, strategies, and a comparison of machine learning approaches. Medical Care, 48(6 Suppl), S106–S113. doi: 10.1097/mlr.0b013e3181de9e17.
  74. Xiao X, Wang G, & Gehrke J (2010). Differential privacy via wavelet transforms. In ICDE'10 (pp. 225–236).
  75. Xiong HY, Alipanahi B, Lee LJ, Bretschneider H, Merico D, Yuen RKC, et al. (2015). The human splicing code reveals new insights into the genetic determinants of disease. Science, 347(6218), 1254806. doi: 10.1126/science.1254806.
  76. Zhang J, Zhang Z, Xiao X, Yang Y, & Winslett M (2012). Functional mechanism: Regression analysis under differential privacy. PVLDB, 5(11), 1364–1375.
