Abstract
Background
Security concerns have been raised since big data became a prominent tool in data analysis. For instance, many machine learning algorithms aim to generate prediction models using training data which contain sensitive information about individuals. The cryptography community is considering secure computation as a solution for privacy protection. In particular, practical requirements have triggered research on the efficiency of cryptographic primitives.
Methods
This paper presents a method to train a logistic regression model without information leakage. We apply the homomorphic encryption scheme of Cheon et al. (ASIACRYPT 2017) for efficient arithmetic over real numbers, and devise a new encoding method to reduce the storage size of an encrypted database. In addition, we adapt Nesterov's accelerated gradient method to reduce the number of iterations as well as the computational cost while maintaining the quality of the output classifier.
Results
Our method shows the state-of-the-art performance of a homomorphic encryption system in a real-world application. The submission based on this work was selected as the best solution of Track 3 at the iDASH privacy and security competition 2017. For example, it took about six minutes to obtain a logistic regression model given a dataset consisting of 1579 samples, each of which has 18 features with a binary outcome variable.
Conclusions
We present a practical solution for outsourcing analysis tools such as logistic regression while preserving data confidentiality.
Electronic supplementary material
The online version of this article (10.1186/s12920-018-0401-7) contains supplementary material, which is available to authorized users.
Keywords: Homomorphic encryption, Machine learning, Logistic regression
Background
Machine learning (ML) is a class of methods in artificial intelligence whose characteristic feature is that they do not solve a particular problem directly but learn a process for finding solutions to a set of similar problems. The theory of ML appeared in the early 1960s on the basis of the achievements of cybernetics [1] and gave impetus to the development of the theory and practice of technically complex learning systems [2]. The goal of ML is to partially or fully automate the solution of complicated tasks in various fields of human activity.
The scope of ML applications is constantly expanding; however, with the rise of ML, security has become an important issue. For example, many medical decisions rely on logistic regression models, and biomedical data usually contain confidential information about individuals [3] which should be treated carefully. Therefore, privacy and security of data are major concerns, especially when deploying outsourced analysis tools.
There have been several studies on secure computation based on cryptographic primitives. Nikolaenko et al. [4] presented a privacy-preserving linear regression protocol on horizontally partitioned data using Yao's garbled circuits [5]. Multi-party computation techniques have also been applied to privacy-preserving logistic regression [6–8]. However, this approach is vulnerable when a party behaves dishonestly, and the assumptions of secret sharing are quite different from those of outsourced computation.
Homomorphic encryption (HE) is a cryptosystem that allows us to perform certain arithmetic operations on encrypted data and receive an encrypted result that corresponds to the result of the same operations performed on the plaintext. Several papers have already discussed ML with HE techniques. Wu et al. [9] used the Paillier cryptosystem [10] and approximated the logistic function using polynomials, but the computational cost grew exponentially with the degree of the approximating polynomial. Aono et al. [11] and Xie et al. [12] used an additive HE scheme to aggregate some intermediate statistics. However, the scenario of Aono et al. relies on the client to decrypt these intermediate statistics, and the method of Xie et al. requires an expensive computation to calculate the intermediate information. The work most closely related to this paper is that of Kim et al. [13], which also used HE-based ML. However, the size of the encrypted data and the learning time were highly dependent on the number of features, so the performance on large datasets was not practical in terms of storage and computational cost.
Since 2011, the iDASH Privacy and Security Workshop has assembled specialists in privacy technology to discuss issues that apply to biomedical data sharing, as well as main stakeholders who provide an overview of the main uses of the data, different laws and regulations, and their own views on privacy. In addition, annual competitions have been held on the basis of the workshop since 2014. The goal of this challenge is to evaluate the performance of state-of-the-art methods that ensure rigorous data confidentiality during data analysis in a cloud environment.
In this paper, we provide a solution to the third track of the iDASH 2017 competition, which aims to develop HE-based secure solutions for building an ML model (i.e., logistic regression) on encrypted data. We propose a general practical solution for HE-based ML that demonstrates good performance and low storage costs. In practice, our output quality is comparable to that of the unencrypted learning case. As a basis, we use the HE scheme for approximate arithmetic [14]. To improve the performance, we apply several additional techniques including a packing method, which reduces the required storage space and optimizes the computational time. We also adapt Nesterov's accelerated gradient [15] to increase the speed of convergence. As a result, we can obtain a high-accuracy classifier using only a small number of iterations.
We give an open-source implementation [16] to demonstrate the performance of our HE-based ML method. With our packing method we can encrypt the dataset of 1579 samples and 18 features using 39 MB of memory. The encrypted learning time is about six minutes. We also run our implementation on the datasets used in [13] to compare the results. For example, training a logistic regression model took about 3.6 min with about 0.02 GB of storage, compared to 114 min and 0.69 GB for Kim et al. [13], on a dataset consisting of 1253 samples, each of which has 9 features.
Methods
Logistic regression
Logistic regression or logit model is a ML model used to predict the probability of occurrence of an event by fitting data to a logistic curve [17]. It is widely used in various fields including machine learning, biomedicine [18], genetics [19], and social sciences [20].
Throughout this paper, we treat the case of a binary dependent variable, represented by ±1. Learning data consists of pairs (x_i, y_i) of a vector of covariates x_i = (x_i1, …, x_if) and a dependent variable y_i ∈ {±1}. Logistic regression aims to find an optimal β ∈ ℝ^(f+1) which maximizes the likelihood estimator

∏_{i=1}^{n} Pr(y_i | x_i) = ∏_{i=1}^{n} 1 / (1 + exp(−z_i^T β)),

or equivalently minimizes the loss function, defined as the negative log-likelihood:

J(β) = (1/n) ∑_{i=1}^{n} log(1 + exp(−z_i^T β)),

where z_i = y_i · (1, x_i) for i = 1, …, n.
Gradient descent
Gradient descent (GD) is a method for finding a local extremum (minimum or maximum) of a function by moving along its gradient. To minimize a function, one moves in the direction of the negative gradient; the step size along this direction can be chosen by a one-dimensional optimization method.
For logistic regression, the gradient of the loss function with respect to β is computed as

∇J(β) = −(1/n) ∑_{i=1}^{n} σ(−z_i^T β) · z_i,

where σ(x) = 1/(1 + exp(−x)) is the sigmoid function. Starting from an initial β^(0), the gradient descent method at each step t updates the regression parameters using the equation

β^(t+1) ← β^(t) + (α_t/n) ∑_{i=1}^{n} σ(−z_i^T β^(t)) · z_i,

where α_t is a learning rate at step t.
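For reference, a minimal plaintext (unencrypted) sketch of this update is shown below; the harmonic schedule α_t = 10/(t+2) is an illustrative placeholder, not the exact constant used in our implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_gd(X, y, iters=9, alpha=lambda t: 10.0 / (t + 2)):
    """Plaintext logistic regression by gradient descent.
    X: (n, f) array of covariates, y: length-n array of labels in {-1, +1}.
    The learning rate schedule is an illustrative harmonic choice."""
    n = len(y)
    Z = y[:, None] * np.hstack([np.ones((n, 1)), X])   # z_i = y_i * (1, x_i)
    beta = np.zeros(Z.shape[1])
    for t in range(iters):
        # beta <- beta + (alpha_t / n) * sum_i sigmoid(-z_i^T beta) * z_i
        beta += (alpha(t) / n) * (sigmoid(-Z @ beta) @ Z)
    return beta
```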
Nesterov’s accelerated gradient
The GD method can suffer from zig-zagging around a local optimum, and this behavior becomes more typical as the number of variables of the objective function increases. Many GD optimization algorithms are widely used to overcome this phenomenon. The momentum method, for example, dampens the oscillation using an accumulated exponential moving average of the gradients of the loss function.
Nesterov's accelerated gradient [15] is a slightly different variant of the momentum update. It uses a moving average on the update vector and evaluates the gradient at this "looked-ahead" position. It theoretically guarantees a better convergence rate of O(1/t²) (vs. O(1/t) for the standard GD algorithm) after t steps, and consistently works slightly better in practice. Starting with a random initial v^(0) = β^(0), the update equations of Nesterov's accelerated GD are as follows:

β^(t+1) ← v^(t) − α_t · ∇J(v^(t)),
v^(t+1) ← (1 − γ_t) · β^(t+1) + γ_t · β^(t),     (1)

where 0 < γ_t < 1 is a moving average smoothing parameter.
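In plaintext, one iteration of (1) can be sketched as follows; the α_t and γ_t schedules here are illustrative placeholders, not the paper's exact choices.

```python
import numpy as np

def nesterov_gd(Z, iters=7, alpha=lambda t: 10.0 / (t + 2),
                gamma=lambda t: 1.0 / (t + 2)):
    """Plaintext Nesterov's accelerated GD on the rows z_i of Z
    (labels already folded in, as z_i = y_i * (1, x_i))."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    n, d = Z.shape
    beta = v = np.zeros(d)
    for t in range(iters):
        grad = -(sigmoid(-Z @ v) @ Z) / n                 # gradient at the look-ahead v^(t)
        beta_next = v - alpha(t) * grad                   # first equation of (1)
        v = (1 - gamma(t)) * beta_next + gamma(t) * beta  # second equation of (1)
        beta = beta_next
    return beta
```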
Approximate homomorphic encryption
HE is a cryptographic scheme that allows us to carry out operations on encrypted data without decryption. Cheon et al. [14] presented a method to construct an HE scheme for arithmetic of approximate numbers (called HEAAN in what follows). The main idea is to treat the encryption noise as part of the error occurring during approximate computations. That is, an encryption ct of a message m under a secret key sk for a ciphertext modulus q has a decryption structure of the form 〈ct, sk〉 = m + e (mod q) for some small error e.
The following is a simple description of HEAAN based on the ring learning with errors problem. For a power-of-two integer N, the cyclotomic polynomial ring of dimension N is denoted by R = ℤ[X]/(X^N + 1). For a positive integer ℓ, we denote by R_ℓ = R/2^ℓ·R the residue ring of R modulo 2^ℓ.
- KeyGen(1^λ).
- For an integer L that corresponds to the largest ciphertext modulus level, given the security parameter λ, output the ring dimension N which is a power of two.
- Set the small distributions χ_key, χ_err, χ_enc over R for secret, error, and encryption, respectively.
- Sample a secret s ← χ_key, a random a ← R_L and an error e ← χ_err. Set the secret key as sk ← (1, s) and the public key as pk ← (b, a) ∈ R_L² where b ← −a·s + e (mod 2^L).
- KSGen_sk(s′). For s′ ∈ R, sample a random a′ ← R_{2L} and an error e′ ← χ_err. Output the switching key as swk ← (b′, a′) ∈ R_{2L}² where b′ ← −a′·s + e′ + 2^L·s′ (mod 2^{2L}).
- Set the evaluation key as evk ← KSGen_sk(s²).
- Enc_pk(m). For m ∈ R, sample v ← χ_enc and e_0, e_1 ← χ_err. Output ct ← v·pk + (m + e_0, e_1) (mod 2^L).
- Dec_sk(ct). For ct = (c_0, c_1) ∈ R_ℓ², output c_0 + c_1·s (mod 2^ℓ).
- Add(ct_1, ct_2). For ct_1, ct_2 ∈ R_ℓ², output ct_add ← ct_1 + ct_2 (mod 2^ℓ).
- CMult(ct; c). For a constant c ∈ R and a ciphertext ct ∈ R_ℓ², output ct′ ← c·ct (mod 2^ℓ).
- Mult_evk(ct_1, ct_2). For ct_i = (b_i, a_i) ∈ R_ℓ², let (d_0, d_1, d_2) = (b_1·b_2, a_1·b_2 + a_2·b_1, a_1·a_2) (mod 2^ℓ). Output ct_mult ← (d_0, d_1) + ⌊2^{−L}·d_2·evk⌉ (mod 2^ℓ).
- ReScale(ct; p). For a ciphertext ct ∈ R_ℓ² and an integer p, output ct′ ← ⌊2^{−p}·ct⌉ (mod 2^{ℓ−p}).
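As a toy illustration of how the scale factor is managed (plain integers only, no encryption): multiplying two encodings of scale 2^p yields a value of scale 2^{2p}, and ReScale divides by 2^p to restore the original scale.

```python
p = 30
x, y = 0.8, -1.25
X, Y = round(x * 2**p), round(y * 2**p)   # fixed-point encodings with scale 2^p
Z = X * Y                                 # product now carries scale 2^(2p)
Z = Z >> p                                # plaintext analogue of ReScale(ct; p)
print(Z / 2**p)                           # ~ -1.0 = x * y
```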
For a power-of-two integer k ≤ N/2, HEAAN provides a technique to pack k complex numbers in a single polynomial using a variant of the complex canonical embedding map τ : R → ℂ^{N/2}. We restrict the plaintext space to vectors of real numbers throughout this paper. Moreover, we multiply plaintexts by a scale factor of 2^p before the rounding operation to maintain their precision.

- Encode(w; p). For w = (w_0, …, w_{k−1}) ∈ ℝ^k, output the polynomial m ← ⌊2^p · τ^{−1}(w)⌉ ∈ R.
- Decode(m; p). For a plaintext m ∈ R, the encoding of an array of a power-of-two number k ≤ N/2 of messages, output the vector w ← 2^{−p} · τ(m) ∈ ℝ^k.
The encoding/decoding techniques support parallel computation over encryption, yielding a better amortized timing. In addition, the HEAAN scheme provides a rotation operation on plaintext slots, i.e., it enables us to securely obtain an encryption of the shifted plaintext vector (w_r, …, w_{k−1}, w_0, …, w_{r−1}) from an encryption of (w_0, …, w_{k−1}). This requires an additional piece of public information rk, called the rotation key. We denote the rotation operation as follows.
- Rotate_rk(ct; r). Given the rotation key rk, output a ciphertext ct′ encrypting the plaintext vector of ct rotated by r positions.
Refer to [14] for the technical details and the noise analysis.
Database encoding
For an efficient computation, it is crucial to find a good encoding method for the given database. The HEAAN scheme supports the encryption of a plaintext vector and slot-wise operations over encryption. However, our learning data is represented by a matrix (z_ij)_{1≤i≤n, 0≤j≤f}. A recent work [13] used a column-wise approach, i.e., the vector of a specific feature (z_ij)_{1≤i≤n} is encrypted in a single ciphertext. Consequently, this method required (f+1) ciphertexts to encrypt the whole dataset.
In this subsection, we suggest a more efficient encoding method that encrypts the whole matrix in a single ciphertext. A training dataset consists of n samples z_i = (z_i0, …, z_if) for 1 ≤ i ≤ n, which can be represented as a matrix Z as follows:

    Z = [ z_10  z_11  …  z_1f ]
        [ z_20  z_21  …  z_2f ]
        [  ⋮     ⋮    ⋱   ⋮   ]
        [ z_n0  z_n1  …  z_nf ]

For simplicity, we assume that n and (f+1) are power-of-two integers satisfying log n + log(f+1) ≤ log(N/2). Then we can pack the whole matrix in a single ciphertext in a row-by-row manner. Specifically, we identify this matrix with the n·(f+1)-dimensional vector (z_ij)_{1≤i≤n, 0≤j≤f} ↦ w = (w_ℓ)_{0≤ℓ<n·(f+1)} where w_ℓ = z_ij for ℓ = (f+1)·(i−1) + j, that is,

    w = (z_10, …, z_1f, z_20, …, z_2f, …, z_n0, …, z_nf).

In a general case, we can pad zeros to set the number of samples and the dimension of the weight vector to powers of two, as shown in the sketch below.
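A plaintext sketch of this row-by-row packing, including the zero padding, follows; encode_matrix is a hypothetical helper name of ours, not part of the HEAAN API.

```python
import numpy as np

def encode_matrix(Z, N=2**16):
    """Flatten an n x (f+1) matrix into one slot vector, row-major:
    slot (f+1)*(i-1) + j holds z_ij, after padding both dimensions
    to powers of two with zeros."""
    n, d = Z.shape
    n_pad = 1 << (n - 1).bit_length()      # pad sample count to a power of two
    d_pad = 1 << (d - 1).bit_length()      # pad feature dimension likewise
    assert n_pad * d_pad <= N // 2, "matrix does not fit in a single ciphertext"
    W = np.zeros((n_pad, d_pad))
    W[:n, :d] = Z
    return W.reshape(-1)
```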
It is necessary to perform shifting operations of row and column vectors for the evaluation of the GD algorithm. In the rest of this subsection, we explain how to perform these operations using the rotation algorithm provided by the HEAAN scheme. As described above, the algorithm Rotate(ct; r) can shift the encrypted vector by r positions. In particular, this operation is useful in our implementation when r = f+1 or r = 1. In the first case, a given matrix Z = (z_ij)_{1≤i≤n, 0≤j≤f} is converted into the matrix

    Z′ = [ z_20  z_21  …  z_2f ]
         [  ⋮     ⋮    ⋱   ⋮   ]
         [ z_n0  z_n1  …  z_nf ]
         [ z_10  z_11  …  z_1f ]

while the latter case outputs the matrix

    Z′′ = [ z_11  z_12  …  z_1f  z_20 ]
          [ z_21  z_22  …  z_2f  z_30 ]
          [  ⋮     ⋮    ⋱   ⋮     ⋮   ]
          [ z_n1  z_n2  …  z_nf  z_10 ]

over encryption. The matrix Z′ is obtained from Z by cyclically shifting its row vectors, and Z′′ can be viewed as an incomplete column shifting because of its last column.
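On the flattened vector, both operations are plain cyclic rotations; in slot arithmetic they can be mimicked with np.roll (a negative shift rotates toward lower indices, matching Rotate(ct; r)). A toy check:

```python
import numpy as np

n, d = 4, 4                                    # toy padded dimensions (d = f+1)
Z = np.arange(n * d, dtype=float).reshape(n, d)
w = Z.reshape(-1)                              # row-major packing as above
Z_row = np.roll(w, -d).reshape(n, d)           # Rotate(ct; f+1) -> Z' (rows shifted up)
Z_col = np.roll(w, -1).reshape(n, d)           # Rotate(ct; 1)  -> Z'' (incomplete column shift)
```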
Polynomial approximation of the sigmoid function
One limitation of existing HE cryptosystems is that they only support polynomial arithmetic operations. The evaluation of the sigmoid function is therefore an obstacle for the implementation of logistic regression, since the sigmoid cannot be expressed as a polynomial.
Kim et al. [13] used the least squares approach to find a global polynomial approximation of the sigmoid function. We adopt this approximation method and consider the degree 3, 5, and 7 least squares polynomials of the sigmoid function over the domain [−8, 8]; we observed that the inner product values in our experiments belong to this interval. For simplicity, a least squares polynomial of σ(−x) is denoted by g(x), so that g(z_i^T β) ≈ σ(−z_i^T β) whenever |z_i^T β| ≤ 8. The approximations of degree 3, 5, and 7 are denoted by g_3(x), g_5(x), and g_7(x), respectively.
A low-degree polynomial requires a smaller evaluation depth while a high-degree polynomial has a better precision. The maximum errors between σ(−x) and the least squares g3(x), g5(x), and g7(x) are approximately 0.114, 0.061 and 0.032, respectively.
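These fits can be reproduced numerically; the sketch below performs a discrete least squares fit of σ(−x) on a fine grid over [−8, 8] (an approximation of the continuous least squares fit) and reports the maximum error, which should land near the figures quoted above.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def lsq_sigmoid(deg, a=8.0, samples=4001):
    """Discrete least squares polynomial fit of sigma(-x) = 1/(1+e^x) on [-a, a]."""
    x = np.linspace(-a, a, samples)
    target = 1.0 / (1.0 + np.exp(x))            # sigma(-x)
    coeffs = P.polyfit(x, target, deg)
    max_err = np.max(np.abs(P.polyval(x, coeffs) - target))
    return coeffs, max_err

for deg in (3, 5, 7):
    _, err = lsq_sigmoid(deg)
    print(deg, round(err, 3))   # roughly 0.114, 0.061, 0.032
```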
Homomorphic evaluation of the gradient descent
This section explains how to securely train a logistic regression model using the HEAAN scheme. To be precise, we explicitly describe the full pipeline of the evaluation of the GD algorithm. We adopt the same assumptions as in the previous section, so that the whole database can be encrypted in a single ciphertext.
First of all, a client encrypts the dataset and the initial (random) weight vector β^(0) and sends them to the public cloud. The dataset is encoded into a matrix Z of size n×(f+1), and the weight vector is copied n times to fill the plaintext slots. The plaintext matrices of the resulting ciphertexts ct_z and ct_β^(0) are described as follows:

    ct_z : [ z_10  …  z_1f ]        ct_β^(0) : [ β_0^(0)  …  β_f^(0) ]
           [  ⋮    ⋱   ⋮   ]                   [    ⋮      ⋱     ⋮   ]
           [ z_n0  …  z_nf ]                   [ β_0^(0)  …  β_f^(0) ]
As mentioned before, both Z and β^(0) are scaled by a factor of 2^p before encryption to maintain the precision of plaintexts. We omit the scaling factor in the rest of this section, since every step returns a ciphertext with the scaling factor 2^p.
The public server takes the two ciphertexts ct_z and ct_β^(t) and evaluates the GD algorithm to find an optimal modeling vector. The goal of each iteration is to update the modeling vector β^(t) using the gradient of the loss function:

    β^(t+1) ← β^(t) + (α_t/n) ∑_{i=1}^{n} σ(−z_i^T β^(t)) · z_i,

where α_t denotes the learning rate at the t-th iteration. Each iteration consists of the following eight steps.
Step 1: For the given two ciphertexts ct_z and ct_β^(t), compute their multiplication and rescale it by p bits:

    ct_1 ← ReScale(Mult_evk(ct_z, ct_β^(t)); p).

The output ciphertext contains the values z_ij·β_j^(t) in its plaintext slots, i.e., its plaintext matrix is (z_ij·β_j^(t))_{1≤i≤n, 0≤j≤f}.
Step 2: To obtain the inner products z_i^T β^(t), the public cloud aggregates the values z_ij·β_j^(t) within each row. This step can be done by adapting the incomplete column shifting operation. One simple way is to repeat this operation (f+1) times, but the computational cost can be reduced down to log(f+1) rotations by adding ct_1 to its rotations recursively: starting from ct_2 ← ct_1, compute

    ct_2 ← Add(ct_2, Rotate(ct_2; 2^j))

for j = 0, 1, …, log(f+1) − 1. Then the output ciphertext ct_2 encrypts the inner product values in its first column and some "garbage" values in the other columns, denoted by ⋆, i.e., its plaintext matrix is

    [ z_1^T β^(t)  ⋆  …  ⋆ ]
    [      ⋮       ⋮  ⋱  ⋮ ]
    [ z_n^T β^(t)  ⋆  …  ⋆ ]
Step 3: This step performs a constant multiplication in order to annihilate the garbage values. We compute the encoding polynomial c ← Encode(C; p_c) of the matrix

    C = [ 1  0  …  0 ]
        [ ⋮  ⋮  ⋱  ⋮ ]
        [ 1  0  …  0 ]

using the scaling factor 2^{p_c} for some integer p_c. The parameter p_c is chosen as the bit precision of plaintexts, so it can be smaller than the parameter p. Finally we multiply the polynomial c with the ciphertext ct_2 and rescale it by p_c bits:

    ct_3 ← ReScale(CMult(ct_2; c); p_c).

The garbage values are multiplied by zero while the inner products in the plaintext slots are maintained. Hence the output ciphertext ct_3 encrypts the inner product values in the first column and zeros in the others:

    [ z_1^T β^(t)  0  …  0 ]
    [      ⋮       ⋮  ⋱  ⋮ ]
    [ z_n^T β^(t)  0  …  0 ]
Step 4: The goal of this step is to replicate the inner product values to the other columns. Similarly to Step 2, this can be done by adding the input ciphertext to its column shifting recursively, but in the opposite direction: starting from ct_4 ← ct_3, compute

    ct_4 ← Add(ct_4, Rotate(ct_4; −2^j))

for j = 0, 1, …, log(f+1) − 1. The output ciphertext ct_4 has the same inner product value in each row:

    [ z_1^T β^(t)  …  z_1^T β^(t) ]
    [      ⋮       ⋱       ⋮      ]
    [ z_n^T β^(t)  …  z_n^T β^(t) ]
Step 5: This step simply evaluates an approximating polynomial of the sigmoid function, i.e., ct_5 ← g(ct_4) for some g ∈ {g_3, g_5, g_7}. The output ciphertext encrypts the values g(z_i^T β^(t)) in its plaintext slots:

    [ g(z_1^T β^(t))  …  g(z_1^T β^(t)) ]
    [        ⋮        ⋱        ⋮        ]
    [ g(z_n^T β^(t))  …  g(z_n^T β^(t)) ]
Step 6: The public cloud multiplies the ciphertext ct_5 with the encrypted dataset ct_z and rescales the resulting ciphertext by p bits:

    ct_6 ← ReScale(Mult_evk(ct_5, ct_z); p).

The output ciphertext encrypts the n vectors g(z_i^T β^(t)) · z_i in its rows:

    [ g(z_1^T β^(t))·z_10  …  g(z_1^T β^(t))·z_1f ]
    [          ⋮           ⋱           ⋮           ]
    [ g(z_n^T β^(t))·z_n0  …  g(z_n^T β^(t))·z_nf ]
Step 7: This step aggregates the vectors g(z_i^T β^(t)) · z_i to compute the gradient of the loss function. This is obtained by recursively adding ct_6 to its row shifting: starting from ct_7 ← ct_6, compute

    ct_7 ← Add(ct_7, Rotate(ct_7; 2^j))

for j = log(f+1), …, log(f+1) + log n − 1. Every row of the output ciphertext ct_7 then encrypts the sum

    ∑_{i=1}^{n} g(z_i^T β^(t)) · z_i,

as desired.
Step 8: For the learning rate α_t, the server uses the parameter p_c to compute the scaled learning rate Δ^(t) ← ⌊2^{p_c}·α_t⌉. The public cloud updates β^(t) using the ciphertext ct_7 and the constant Δ^(t):

    ct_β^(t+1) ← Add(ct_β^(t), ReScale(Δ^(t)·ct_7; p_c)).

Finally it returns a ciphertext encrypting the updated modeling vector in each row, where

    β^(t+1) = β^(t) + α_t · ∑_{i=1}^{n} g(z_i^T β^(t)) · z_i

(the averaging factor 1/n is absorbed into the learning rate α_t).
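The slot manipulations of Steps 1–8 can be checked by mirroring them on plaintext vectors, where Mult/CMult become slot-wise products, Add becomes addition, and Rotate(ct; r) becomes a cyclic rotation by r. A minimal sketch follows (no encryption; function and variable names are ours; n and d = f+1 are assumed to be powers of two, with 1/n absorbed into alpha as in Step 8):

```python
import numpy as np

def gd_iteration(w_z, w_beta, n, d, g, alpha):
    """One GD iteration on flattened slot vectors of length n*d.
    w_z packs the matrix Z row-major; w_beta holds beta^(t) copied n times."""
    ct1 = w_z * w_beta                                  # Step 1: slot-wise product
    ct2 = ct1.copy()
    for j in range(int(np.log2(d))):                    # Step 2: fold within rows
        ct2 = ct2 + np.roll(ct2, -(1 << j))
    mask = np.zeros(n * d); mask[::d] = 1.0             # Step 3: the matrix C
    ct3 = ct2 * mask
    ct4 = ct3.copy()
    for j in range(int(np.log2(d))):                    # Step 4: replicate across rows
        ct4 = ct4 + np.roll(ct4, 1 << j)
    ct5 = g(ct4)                                        # Step 5: polynomial sigmoid
    ct6 = ct5 * w_z                                     # Step 6: g(z_i . b) * z_i
    ct7 = ct6.copy()
    for j in range(int(np.log2(n))):                    # Step 7: sum over samples
        ct7 = ct7 + np.roll(ct7, -(d << j))
    return w_beta + alpha * ct7                         # Step 8: update beta
```

Here g can be one of the least squares polynomials from the earlier sketch, applied slot-wise; decrypting ct_β after Step 8 and comparing against the plaintext GD routine is a useful sanity check.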
Homomorphic evaluation of Nesterov’s accelerated gradient
The performance of leveled HE schemes highly depends on the depth of the circuit to be evaluated. The bottleneck of the homomorphic evaluation of the GD algorithm is that we need to repeat the update of the weight vector β^(t) iteratively. Consequently, the total depth grows linearly with the number of iterations and should be minimized for a practical implementation.
For the homomorphic evaluation of Nesterov's accelerated gradient, the client sends one more ciphertext encrypting the initial vector v^(0) to the public cloud. Then the server uses an encryption ct_z of the dataset Z to update the two ciphertexts ct_v^(t) and ct_β^(t) at each iteration. One can securely compute β^(t+1) in the same way as in the previous section. Nesterov's accelerated gradient requires one more step to compute the second equation of (1) and obtain an encryption of v^(t+1) from ct_β^(t+1) and ct_β^(t).
Step 9: Let Δ_1 ← ⌊2^{p_c}·(1 − γ_t)⌉ and Δ_2 ← ⌊2^{p_c}·γ_t⌉. The server obtains the ciphertext ct_v^(t+1) by computing

    ct_v^(t+1) ← ReScale(Δ_1·ct_β^(t+1) + Δ_2·ct_β^(t); p_c).

Then the output ciphertext encrypts the vector

    v^(t+1) = (1 − γ_t)·β^(t+1) + γ_t·β^(t)

in its plaintext slots.
Results
In this section, we present the parameter sets together with experimental results. Our implementation is based on the HEAAN library [21] that implements the approximate HE scheme of Cheon et al. [14]. The source code is publicly available on GitHub [16].
Parameters settings
We explain how to choose the parameter sets for the homomorphic evaluation of the (Nesterov's) GD algorithm, together with a security analysis. We start with the parameter L, the bit size of a fresh ciphertext modulus. The modulus of a ciphertext is reduced after the ReScale operations and the evaluation of an approximate polynomial g(x).
The ReScale procedures after homomorphic multiplications (Steps 1 and 6) reduce the ciphertext modulus by p bits, while the ReScale procedures after constant multiplications (Steps 3 and 8) require p_c bits of modulus reduction. Note that the ciphertext modulus remains the same in Step 9 of Nesterov's accelerated gradient if we compute Steps 8 and 9 together using some precomputed constants. We use a method similar to previous work for the evaluation of the sigmoid function (see [13] for details); the ciphertext modulus is reduced by (2p+3) bits for the evaluation of g_3(x), and by (3p+3) bits for that of g_5(x) and g_7(x). Therefore, we obtain the following lower bound on the parameter L:

    L ≥ ITERNUM · (3p + 2·p_c + 3) + L_0    when g = g_3,
    L ≥ ITERNUM · (4p + 2·p_c + 3) + L_0    when g ∈ {g_5, g_7},

where ITERNUM is the number of iterations of the GD algorithm and L_0 denotes the bit size of the output ciphertext modulus. The modulus of the output ciphertext should be larger than 2^p in order to encrypt the resulting weight vector and maintain its precision. We take p = 30, p_c = 20 and L_0 = 35 in our implementation.
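As a quick consistency check of this modulus budget, a one-line computation recovers the iteration counts reported below; the per-iteration figures are taken from the bound above, which we note is the reconstruction consistent with the stated parameters.

```python
def max_iternum(p=30, pc=20, L0=35, L_bound=1284, deg=3):
    """Largest ITERNUM with ITERNUM * L_iter + L0 <= L_bound."""
    L_iter = (3 * p + 2 * pc + 3) if deg == 3 else (4 * p + 2 * pc + 3)
    return (L_bound - L0) // L_iter

print(max_iternum(deg=3))   # 9
print(max_iternum(deg=5))   # 7 (same for deg=7)
```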
The dimension of the cyclotomic ring is chosen as N = 2^16 following the security estimator of Albrecht et al. [22] for the learning with errors problem. In this case, the bit size L of a fresh ciphertext modulus should be bounded by 1284 to ensure the security level λ = 80 against known attacks. Hence we run ITERNUM = 9 iterations of the GD algorithm when g = g_3, and ITERNUM = 7 iterations when g = g_5 or g = g_7.
The smoothing parameter γ_t is chosen in accordance with [15]. The choice of a proper GD learning rate α_t normally depends on the problem at hand. Choosing α_t too small leads to slow convergence, while choosing it too large can lead to divergence or to fluctuation near a local optimum. The learning rate is often optimized by trial and error, which we were not able to perform over encrypted data. Under these conditions a harmonic progression seems to be a good candidate, and we choose a harmonically decreasing learning rate in our implementation.
Implementation
All experiments were performed on a machine with an Intel Xeon CPU E5-2620 v4 processor at 2.10 GHz.
Task for the iDASH challenge. In the genomic data privacy and security protection competition 2017, the goal of Track 3 was to devise a weight vector to predict disease using genotype and phenotype data (Additional file 1: iDASH). This dataset consists of 1579 samples, each of which has 18 features and a cohort label (disease vs. healthy). Since we use the ring dimension N = 2^16, we can pack at most N/2 = 2^15 values in a single ciphertext; after padding the number of samples and the number of columns to powers of two, the 1579×19 dataset entries no longer fit into one ciphertext. We overcome this issue by dividing the dataset into two parts of sizes 1579×16 and 1579×3 and encoding them separately into two ciphertexts. In general, this method can be applied to datasets with any number of features: the dataset can be encrypted into ⌈(f+1)·n·(N/2)^{−1}⌉ ciphertexts.
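The ciphertext count under this padding can be sketched as follows; num_ciphertexts is a hypothetical helper of ours, assuming the sample dimension is padded to a power of two before the columns are split into blocks.

```python
import math

def num_ciphertexts(n, f, N=2**16):
    """Ciphertexts needed when n is zero-padded to a power of two."""
    n_pad = 1 << (n - 1).bit_length()       # 1579 -> 2048
    cols_per_ct = (N // 2) // n_pad         # columns fitting in one ciphertext
    return math.ceil((f + 1) / cols_per_ct)

print(num_ciphertexts(1579, 18))            # 2, matching the 16 + 3 column split
```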
In order to estimate the validity of our method, we used the 10-fold cross-validation (CV) technique: the dataset is randomly partitioned into ten folds of approximately equal size, and every subset of 9 folds is used for training, with the remaining fold used for testing the model. The performance of our solution, including the average running time per fold of the 10-fold CV (encryption and evaluation) and the storage for the encrypted dataset, is shown in Table 1. The table also reports the average accuracy and the AUC (area under the receiver operating characteristic curve), which estimate the quality of the binary classifier.
Table 1. Results of the 10-fold CV on the iDASH dataset

| Sample num | Feature num | deg g | Iter num | Enc time | Learn time | Storage | Accuracy | AUC |
|---|---|---|---|---|---|---|---|---|
| 1579 | 18 | 3 | 9 | 4 s | 7.94 min | 0.04 GB | 61.72% | 0.677 |
| 1579 | 18 | 5 | 7 | 4 s | 6.07 min | 0.04 GB | 62.87% | 0.689 |
| 1579 | 18 | 7 | 7 | 4 s | 7.01 min | 0.04 GB | 62.36% | 0.689 |
Comparison. We present experimental results comparing the performance of our implementation with that of [13]. For a fair comparison, we use the same 5-fold CV technique on five datasets: the Myocardial Infarction dataset from Edinburgh [23] (Additional file 2: Edinburgh), the Low Birth Weight Study (Additional file 3: lbw), NHANES III (Additional file 4: nhanes3), the Prostate Cancer Study (Additional file 5: pcs), and the Umaru Impact Study (Additional file 6: uis) [24–27]. All datasets have a single binary outcome variable.
All the experimental results are summarized in Table 2. Our new packing method reduces the storage of ciphertexts, and the use of Nesterov's accelerated gradient achieves a much higher speed than the approach of [13]. For example, it took 3.6 min to train a logistic regression model using the encrypted Edinburgh dataset of size 0.02 GB, compared to 114 min and 0.69 GB for the previous work [13], while achieving a comparable or better quality of the output model.
Table 2. Comparison with [13] using 5-fold CV on five datasets

| Dataset | Sample num | Feature num | Method | deg g | Iter num | Enc time | Learn time | Storage | Accuracy | AUC |
|---|---|---|---|---|---|---|---|---|---|---|
| Edinburgh | 1253 | 9 | Ours | 5 | 7 | 2 s | 3.6 min | 0.02 GB | 91.04% | 0.958 |
| Edinburgh | 1253 | 9 | [13] | 3 | 25 | 12 s | 114 min | 0.69 GB | 86.03% | 0.956 |
| Edinburgh | 1253 | 9 | [13] | 7 | 20 | 12 s | 114 min | 0.71 GB | 86.19% | 0.954 |
| lbw | 189 | 9 | Ours | 5 | 7 | 2 s | 3.3 min | 0.02 GB | 69.19% | 0.689 |
| lbw | 189 | 9 | [13] | 3 | 25 | 11 s | 99 min | 0.67 GB | 69.30% | 0.665 |
| lbw | 189 | 9 | [13] | 7 | 20 | 11 s | 86 min | 0.70 GB | 69.29% | 0.678 |
| nhanes3 | 15649 | 15 | Ours | 5 | 7 | 14 s | 7.3 min | 0.16 GB | 79.22% | 0.717 |
| nhanes3 | 15649 | 15 | [13] | 3 | 25 | 21 s | 235 min | 1.15 GB | 79.23% | 0.732 |
| nhanes3 | 15649 | 15 | [13] | 7 | 20 | 21 s | 208 min | 1.17 GB | 79.23% | 0.737 |
| pcs | 379 | 9 | Ours | 5 | 7 | 2 s | 3.5 min | 0.02 GB | 68.27% | 0.740 |
| pcs | 379 | 9 | [13] | 3 | 25 | 11 s | 103 min | 0.68 GB | 68.85% | 0.742 |
| pcs | 379 | 9 | [13] | 7 | 20 | 11 s | 97 min | 0.70 GB | 69.12% | 0.750 |
| uis | 575 | 8 | Ours | 5 | 7 | 2 s | 3.5 min | 0.02 GB | 74.44% | 0.603 |
| uis | 575 | 8 | [13] | 3 | 25 | 10 s | 104 min | 0.61 GB | 74.43% | 0.585 |
| uis | 575 | 8 | [13] | 7 | 20 | 10 s | 96 min | 0.63 GB | 75.43% | 0.617 |
Discussion
The rapid growth of computing power has initiated the study of more complicated ML algorithms in various fields, including biomedical data analysis [28, 29]. HE is a promising solution for the privacy issue, but its efficiency in real applications remains an open question. A natural next step would be to extend this work to other ML algorithms such as deep learning.
One constraint of our approach is that the number of iterations of the GD algorithm is limited by the choice of HE parameters. In terms of asymptotic complexity, applying the bootstrapping method of the approximate HE scheme [30] to the GD algorithm would achieve a computational cost linear in the number of iterations.
Conclusion
In this paper, we presented a solution to homomorphically evaluate the learning phase of a logistic regression model using the gradient descent algorithm and the approximate HE scheme. Our solution demonstrates good performance, and the quality of learning is comparable to that of the unencrypted case. Our encoding method can easily be extended to large-scale datasets, which shows the practical potential of our approach.
Additional files

Additional file 1: iDASH. The iDASH competition 2017 Track 3 dataset.
Additional file 2: Edinburgh. The Myocardial Infarction dataset from Edinburgh.
Additional file 3: lbw. The Low Birth Weight Study dataset.
Additional file 4: nhanes3. The NHANES III dataset.
Additional file 5: pcs. The Prostate Cancer Study dataset.
Additional file 6: uis. The Umaru Impact Study dataset.
Acknowledgements
The authors would like to thank the editor and reviewers for the thoughtful comments and constructive suggestions, which greatly helped us improve the quality of this manuscript. The authors also thank Jinhyuck Jeong for giving valuable comments to the technical part of the manuscript.
Funding
This work was partly supported by Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No.B0717-16-0098) and by the National Research Foundation of Korea (NRF) Grant funded by the Korean Government (MSIP) (No.2017R1A5A1015626).
MK was supported in part by NIH grants U01TR002062 and U01EB023685. Publication of this article has been funded by the NRF Grant funded by the Korean Government (MSIT) (No.2017R1A5A1015626).
Availability of data and materials
All datasets are available in the Additional files provided with the publication. The HEAAN library is available at https://github.com/kimandrik/HEAAN. Our implementation is available at https://github.com/kimandrik/HEML.
About this supplement
This article has been published as part of BMC Medical Genomics Volume 11 Supplement 4, 2018: Proceedings of the 6th iDASH Privacy and Security Workshop 2017. The full contents of the supplement are available online at https://bmcmedgenomics.biomedcentral.com/articles/supplements/volume-11-supplement-4.
Abbreviations
- AUC
Area under the receiver operating characteristic curve
- CV
Cross validation
- GD
Gradient descent
- HE
Homomorphic encryption
- ML
Machine learning
Authors’ contributions
JHC designed and supervised the study. KL analyzed the data. AK drafted the source code and MK optimized it. AK and MK performed the experiments. AK and YS are major contributors in writing the manuscript. All authors read and approved the final manuscript.
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
All authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Contributor Information
Yongsoo Song, Email: yongsoosong@ucsd.edu.
Miran Kim, Email: mrkim@ucsd.edu.
Keewoo Lee, Email: activecondor@snu.ac.kr.
Jung Hee Cheon, Email: jhcheon@snu.ac.kr.
References
- 1. Samuel AL. Some studies in machine learning using the game of checkers. IBM J Res Dev. 1959;3(3):210–29.
- 2. Dietz E. Application of logistic regression and logistic discrimination in medical decision making. Biom J. 1987;29(6):747–51.
- 3. Rousseau D. Biomedical research: changing the Common Rule. 2017. https://www.ammon-rousseau.com/changing-the-rules-by-david-rousseau/. Accessed 19 Aug 2017. Archived at http://www.webcitation.org/6spHgiYRI.
- 4. Nikolaenko V, Weinsberg U, Ioannidis S, Joye M, Boneh D, Taft N. Privacy-preserving ridge regression on hundreds of millions of records. In: 2013 IEEE Symposium on Security and Privacy (SP). IEEE; 2013. p. 334–48.
- 5. Yao AC-C. How to generate and exchange secrets. In: 27th Annual Symposium on Foundations of Computer Science. IEEE; 1986. p. 162–7.
- 6. El Emam K, Samet S, Arbuckle L, Tamblyn R, Earle C, Kantarcioglu M. A secure distributed logistic regression protocol for the detection of rare adverse drug events. J Am Med Inform Assoc. 2012;20(3):453–61.
- 7. Nardi Y, Fienberg SE, Hall RJ. Achieving both valid and secure logistic regression analysis on aggregated data from different private sources. J Priv Confidentiality. 2012;4(1):9.
- 8. Mohassel P, Zhang Y. SecureML: a system for scalable privacy-preserving machine learning. In: 2017 IEEE Symposium on Security and Privacy (SP). IEEE; 2017.
- 9. Wu S, Teruya T, Kawamoto J, Sakuma J. Privacy-preservation for stochastic gradient descent application to secure logistic regression. In: 27th Annual Conference of the Japan Society for Artificial Intelligence. 2013. p. 1–4.
- 10. Paillier P. Public-key cryptosystems based on composite degree residuosity classes. In: International Conference on the Theory and Applications of Cryptographic Techniques. Springer; 1999. p. 223–38.
- 11. Aono Y, Hayashi T, Trieu Phong L, Wang L. Scalable and secure logistic regression via homomorphic encryption. In: Proceedings of the Sixth ACM Conference on Data and Application Security and Privacy. ACM; 2016. p. 142–4.
- 12. Xie W, Wang Y, Boker SM, Brown DE. PrivLogit: efficient privacy-preserving logistic regression by tailoring numerical optimizers. arXiv preprint arXiv:1611.01170. 2016.
- 13. Kim M, Song Y, Wang S, Xia Y, Jiang X. Secure logistic regression based on homomorphic encryption: design and evaluation. JMIR Med Inform. 2018;6(2):e19.
- 14. Cheon JH, Kim A, Kim M, Song Y. Homomorphic encryption for arithmetic of approximate numbers. In: Advances in Cryptology – ASIACRYPT 2017. Springer; 2017. p. 409–37.
- 15. Nesterov Y. A method of solving a convex programming problem with convergence rate O(1/k²). Soviet Mathematics Doklady. 1983;27:372–6.
- 16. Cheon JH, Kim A, Kim M, Lee K, Song Y. Implementation for iDASH competition 2017. 2017. https://github.com/kimandrik/HEML. Accessed 11 July 2018. Archived at http://www.webcitation.org/70qbe6xii.
- 17. Harrell FE. Ordinal logistic regression. In: Regression Modeling Strategies. Springer; 2001. p. 331–43.
- 18. Lowrie EG, Lew NL. Death risk in hemodialysis patients: the predictive value of commonly measured variables and an evaluation of death rate differences between facilities. Am J Kidney Dis. 1990;15(5):458–82.
- 19. Lewis CM, Knight J. Introduction to genetic association studies. Cold Spring Harb Protoc. 2012;2012(3):068163.
- 20. Gayle V, Lambert PS. Logistic regression models in sociological research. 2009.
- 21. Cheon JH, Kim A, Kim M, Song Y. Implementation of HEAAN. 2016. https://github.com/kimandrik/HEAAN. Accessed 19 Aug 2017. Archived at http://www.webcitation.org/6spMzVJ6U.
- 22. Albrecht MR, Player R, Scott S. On the concrete hardness of learning with errors. J Math Cryptol. 2015;9(3):169–203.
- 23. Kennedy R, Fraser H, McStay L, Harrison R. Early diagnosis of acute myocardial infarction using clinical and electrocardiographic data at presentation: derivation and evaluation of logistic regression models. Eur Heart J. 1996;17(8):1181–91.
- 24. lbw: Low Birth Weight Study data. 2017. https://rdrr.io/rforge/LogisticDx/man/lbw.html. Accessed 19 Aug 2017. Archived at http://www.webcitation.org/6spNFX2b5.
- 25. nhanes3: NHANES III data. 2017. https://rdrr.io/rforge/LogisticDx/man/nhanes3.html. Accessed 19 Aug 2017. Archived at http://www.webcitation.org/6spNJJFDx.
- 26. pcs: Prostate Cancer Study data. 2017. https://rdrr.io/rforge/LogisticDx/man/pcs.html. Accessed 19 Aug 2017. Archived at http://www.webcitation.org/6spNLXr5a.
- 27. uis: UMARU IMPACT Study data. 2017. https://rdrr.io/rforge/LogisticDx/man/uis.html. Accessed 19 Aug 2017. Archived at http://www.webcitation.org/6spNOLB9n.
- 28. Wang Y. Application of deep learning to biomedical informatics. Int J Appl Sci Res Rev. 2016.
- 29. Ravì D, Wong C, Deligianni F, Berthelot M, Andreu-Perez J, Lo B, Yang G-Z. Deep learning for health informatics. IEEE J Biomed Health Inform. 2017;21(1):4–21.
- 30. Cheon JH, Han K, Kim A, Kim M, Song Y. Bootstrapping for approximate homomorphic encryption. In: Advances in Cryptology – EUROCRYPT 2018. Springer; 2018. p. 360–84.