Logistic regression model training based on the approximate homomorphic encryption

Abstract

Background

Security concerns have been raised since big data became a prominent tool in data analysis. For instance, many machine learning algorithms aim to generate prediction models from training data that contain sensitive information about individuals. The cryptography community considers secure computation a solution for privacy protection. In particular, practical requirements have triggered research on the efficiency of cryptographic primitives.

Methods

This paper presents a method to train a logistic regression model without information leakage. We apply the homomorphic encryption scheme of Cheon et al. (ASIACRYPT 2017) for efficient arithmetic over real numbers, and devise a new encoding method to reduce the storage of the encrypted database. In addition, we adapt Nesterov’s accelerated gradient method to reduce the number of iterations as well as the computational cost while maintaining the quality of the output classifier.

Results

Our method demonstrates the state-of-the-art performance of a homomorphic encryption system in a real-world application. The submission based on this work was selected as the best solution of Track 3 at the iDASH privacy and security competition 2017. For example, it took about six minutes to obtain a logistic regression model from a dataset consisting of 1579 samples, each of which has 18 features and a binary outcome variable.

Conclusions

We present a practical solution for outsourcing analysis tools such as logistic regression analysis while preserving data confidentiality.

Background

Machine learning (ML) is a class of methods in artificial intelligence whose characteristic feature is that they do not solve a particular problem directly but learn a process for finding solutions to a set of similar problems. The theory of ML appeared in the early 1960s on the basis of the achievements of cybernetics [1] and gave the impetus to the development of the theory and practice of technically complex learning systems [2]. The goal of ML is to partially or fully automate the solution of complicated tasks in various fields of human activity.

The scope of ML applications is constantly expanding; however, with the rise of ML, security has become an important issue. For example, many medical decisions rely on logistic regression models, and biomedical data usually contain confidential information about individuals [3] which should be treated carefully. Therefore, privacy and security of data are major concerns, especially when deploying outsourced analysis tools.

There have been several studies on secure computation based on cryptographic primitives. Nikolaenko et al. [4] presented a privacy-preserving linear regression protocol on horizontally partitioned data using Yao’s garbled circuits [5]. Multi-party computation techniques were also applied to privacy-preserving logistic regression [6–8]. However, this approach is vulnerable when a party behaves dishonestly, and the assumption of secret sharing is quite different from that of outsourced computation.

Homomorphic encryption (HE) is a cryptosystem that allows us to perform certain arithmetic operations on encrypted data and receive an encrypted result that corresponds to the result of the same operations performed on the plaintext. Several papers have already discussed ML with HE techniques. Wu et al. [9] used the Paillier cryptosystem [10] and approximated the logistic function using polynomials, but this required a computational cost growing exponentially in the degree of the approximation polynomial. Aono et al. [11] and Xie et al. [12] used an additive HE scheme to aggregate some intermediate statistics. However, the scenario of Aono et al. relies on the client to decrypt these intermediate statistics, and the method of Xie et al. requires an expensive computation to calculate the intermediate information. The work most closely related to this paper is that of Kim et al. [13], which also used HE-based ML. However, the size of the encrypted data and the learning time were highly dependent on the number of features, so the performance on large datasets was not practical in terms of storage and computational cost.

Since 2011, the iDASH Privacy and Security Workshop has assembled specialists in privacy technology to discuss issues related to biomedical data sharing, together with the main stakeholders, who provide an overview of the main uses of the data, the different laws and regulations, and their own views on privacy. In addition, annual competitions have been held alongside the workshop since 2014. The goal of this challenge is to evaluate the performance of state-of-the-art methods that ensure rigorous data confidentiality during data analysis in a cloud environment.

In this paper, we provide a solution to the third track of the iDASH 2017 competition, which aimed to develop HE-based secure solutions for building an ML model (i.e., logistic regression) on encrypted data. We propose a general practical solution for HE-based ML that demonstrates good performance and low storage costs. In practice, our output quality is comparable to that of the unencrypted learning case. As a basis, we use the HE scheme for approximate arithmetic [14]. To improve the performance, we apply several additional techniques, including a packing method, which reduces the required storage space and optimizes the computation time. We also adapt Nesterov’s accelerated gradient [15] to increase the speed of convergence. As a result, we obtain a high-accuracy classifier using only a small number of iterations.

We give an open-source implementation [16] to demonstrate the performance of our HE-based ML method. With our packing method, we can encrypt the dataset with 1579 samples and 18 features using 39 MB of memory. The encrypted learning time is about six minutes. We also run our implementation on the datasets used in [13] to compare the results. For example, training a logistic regression model took about 3.6 min with about 0.02 GB of storage, compared to 114 min and 0.69 GB for Kim et al. [13], on a dataset consisting of 1253 samples, each of which has 9 features.

Methods

Logistic regression

Logistic regression, or the logit model, is an ML model used to predict the probability of occurrence of an event by fitting data to a logistic curve [17]. It is widely used in various fields, including machine learning, biomedicine [18], genetics [19], and the social sciences [20].

Throughout this paper, we treat the case of a binary dependent variable, represented by ±1. The learning data consist of pairs \(({\mathbf{x}}_{i}, y_{i})\) of a vector of covariates \({\mathbf {x}}_{i} = (x_{i1},..., x_{if}) \in {\mathbb {R}}^{f}\) and a dependent variable \(y_{i} \in \{\pm 1\}\). Logistic regression aims to find an optimal \({\boldsymbol {\beta }} \in {\mathbb {R}}^{f+1}\) which maximizes the likelihood estimator

$$\prod_{i=1}^{n} \Pr(y_{i}|{\mathbf{x}}_{i}) = \prod_{i=1}^{n}\frac{1}{1 + \exp(-y_{i}(1,{\mathbf{x}}_{i})^{T} {\boldsymbol{\beta}})},$$

or equivalently minimizes the loss function, defined as the negative log-likelihood:

$$J({\boldsymbol{\beta}}) = \frac{1}{n}\sum_{i=1}^{n} \log\left(1 + \exp\left(-\mathbf{z}_{i}^{T} {\boldsymbol{\beta}}\right)\right)$$

where \(\mathbf{z}_{i} = y_{i}\cdot (1, {\mathbf{x}}_{i})\) for \(i=1,\dots,n\).

Gradient descent

Gradient descent (GD) is a method for finding a local extremum (minimum or maximum) of a function by moving along its gradient. To minimize a function, one moves in the direction opposite to the gradient; the step size along this direction can be chosen by one-dimensional optimization methods.

For logistic regression, the gradient of the cost function with respect to β is computed by

$$\nabla J({\boldsymbol{\beta}}) = -\frac{1}{n}\sum_{i=1}^{n} \sigma\left(-\mathbf{z}_{i}^{T} {\boldsymbol{\beta}}\right) \cdot \mathbf{z}_{i}$$

where \(\sigma (x) = \frac {1}{1+\exp (-x)}\). Starting from an initial β0, the gradient descent method at each step t updates the regression parameters using the equation

$${\boldsymbol{\beta}}^{(t+1)} \leftarrow {\boldsymbol{\beta}}^{(t)} + \frac{\alpha_{t}}{n}\sum_{i=1}^{n} \sigma\left(-\mathbf{z}_{i}^{T} {\boldsymbol{\beta}}^{(t)}\right) \cdot \mathbf{z}_{i} $$

where αt is the learning rate at step t.
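To make the update rule concrete, the following is a minimal plaintext sketch in Python (not the encrypted pipeline). The toy data and random seed are illustrative assumptions; the learning-rate schedule \(\alpha_t = 10/(t+1)\) follows the choice described later in the Results section.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gd_step(Z, beta, alpha):
    """One GD update: beta <- beta + (alpha/n) * sum_i sigmoid(-z_i^T beta) * z_i."""
    n = Z.shape[0]
    s = sigmoid(-Z @ beta)                  # sigma(-z_i^T beta) for all samples
    return beta + (alpha / n) * (Z.T @ s)

rng = np.random.default_rng(0)              # toy data, for illustration only
X = rng.normal(size=(100, 3))
y = np.where(X @ np.array([1.0, -2.0, 0.5]) > 0, 1, -1)
Z = y[:, None] * np.hstack([np.ones((100, 1)), X])   # z_i = y_i * (1, x_i)

beta = np.zeros(4)
for t in range(25):
    beta = gd_step(Z, beta, alpha=10.0 / (t + 1))
print(beta)                                 # the learned weight vector
```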

Nesterov’s accelerated gradient

The GD method can zig-zag toward a local optimum, and this behavior becomes more pronounced as the number of variables of the objective function increases. Many GD optimization algorithms are used to overcome this phenomenon. The momentum method, for example, dampens oscillation using an accumulated exponential moving average of the gradients of the loss function.

Nesterov’s accelerated gradient [15] is a slightly different variant of the momentum update. It applies the moving average to the update vector and evaluates the gradient at this “looked-ahead” position. It theoretically guarantees a better convergence rate of O(1/t2) after t steps (vs. O(1/t) for the standard GD algorithm), and it consistently works slightly better in practice. Starting with a random initial v(0)=β(0), the update equations of Nesterov’s accelerated GD are as follows:

$$\begin{array}{*{20}l} \left\{\begin{array}{ll} {\boldsymbol{\beta}}^{(t+1)} & = {\mathbf{v}}^{(t)} - \alpha_{t} \cdot \bigtriangledown J\left({\mathbf{v}}^{(t)}\right), \\ {\mathbf{v}}^{(t+1)} & = (1-\gamma_{t})\cdot{\boldsymbol{\beta}}^{(t + 1)} + \gamma_{t}\cdot{\boldsymbol{\beta}}^{(t)}, \end{array}\right. \end{array} $$
(1)

where 0<γt<1 is a moving average smoothing parameter.
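A compact plaintext sketch of one step under these update equations; grad_J stands for any gradient oracle, and the quadratic example and schedules are illustrative assumptions:

```python
import numpy as np

def nesterov_step(beta, v, grad_J, alpha, gamma):
    beta_next = v - alpha * grad_J(v)                # gradient at the look-ahead point
    v_next = (1 - gamma) * beta_next + gamma * beta  # moving average of iterates
    return beta_next, v_next

grad_J = lambda b: 2.0 * (b - 3.0)                   # gradient of J(b) = (b - 3)^2
beta = v = np.array([0.0])
for t in range(20):
    beta, v = nesterov_step(beta, v, grad_J, alpha=0.1, gamma=1.0 / (t + 3))
print(beta)                                          # converges toward 3
```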

Approximate homomorphic encryption

HE is a cryptographic scheme that allows us to carry out operations on encrypted data without decryption. Cheon et al. [14] presented a method to construct an HE scheme for arithmetic of approximate numbers (called HEAAN in what follows). The main idea is to treat the encryption noise as part of the error occurring during approximate computations. That is, an encryption ct of a message \(m \in {\mathcal {R}}\) under a secret key sk for a ciphertext modulus q has a decryption structure of the form 〈ct,sk〉=m+e (mod q) for some small e.

The following is a simple description of HEAAN based on the ring learning with errors problem. For a power-of-two integer N, the cyclotomic polynomial ring of dimension N is denoted by \({\mathcal R} ={\mathbb {Z}}[\!X] /\left (X^{N} + 1\right)\). For a positive integer \(\ell\), we denote by \({\mathcal {R}}_{\ell } = {\mathcal {R}} / 2^{\ell } {\mathcal {R}} = {\mathbb {Z}}_{2^{\ell }}[\!X] /\left (X^{N} + 1\right)\) the residue ring of \({\mathcal {R}}\) modulo \(2^{\ell}\).

  • KeyGen(1λ).

    • For an integer L that corresponds to the largest ciphertext modulus level, given the security parameter λ, output the ring dimension N which is a power of two.

    • Set the small distributions χkey,χerr,χenc over \({\mathcal R}\) for secret, error, and encryption, respectively.

    • Sample a secret \(s\leftarrow \chi_{\mathsf{key}}\), a random \(a\leftarrow {\mathcal R}_{L}\) and an error \(e\leftarrow \chi_{\mathsf{err}}\). Set the secret key as sk←(1,s) and the public key as \(\mathsf {pk}\leftarrow (b,a)\in {\mathcal R}_{L}^{2}\) where \(b\leftarrow -as+e \pmod{2^{L}}\).

  • KSGensk(s′). For \(s'\in {\mathcal R}\), sample a random \(a^{\prime }\leftarrow {\mathcal R}_{2 \cdot L}\) and an error \(e'\leftarrow \chi_{\mathsf{err}}\). Output the switching key as \(\mathsf {swk}\leftarrow (b^{\prime },a^{\prime })\in {\mathcal R}_{2\cdot L}^{2}\) where \(b'\leftarrow -a's+e'+2^{L}s' \pmod{2^{2L}}\).

    • Set the evaluation key as evk←KSGensk(s2).

  • Encpk(m). For \(m\in {\mathcal R}\), sample \(v\leftarrow \chi_{\mathsf{enc}}\) and \(e_{0},e_{1}\leftarrow \chi_{\mathsf{err}}\). Output \(v\cdot \mathsf{pk}+(m+e_{0},e_{1}) \pmod{2^{L}}\).

  • Decsk(ct). For \(\mathsf {ct}= (c_{0},c_{1})\in {\mathcal R}_{\ell }^{2}\), output \(c_{0}+c_{1}\cdot s \pmod{2^{\ell}}\).

  • Add(ct1,ct2). For \(\mathsf {ct}_{1},\mathsf {ct}_{2}\in {\mathcal R}_{\ell }^{2}\), output \(\mathsf{ct}_{\mathsf{add}}\leftarrow \mathsf{ct}_{1}+\mathsf{ct}_{2} \pmod{2^{\ell}}\).

  • CMultevk(ct;c). For \(\mathsf {ct}\in {\mathcal R}_{\ell }^{2}\) and \(c\in {\mathcal R}\), output \(\mathsf{ct}'\leftarrow c\cdot \mathsf{ct} \pmod{2^{\ell}}\).

  • Multevk(ct1,ct2). For \(\mathsf {ct}_{1}=(b_{1},a_{1}),\mathsf {ct}_{2}=(b_{2},a_{2})\in {\mathcal R}_{\ell }^{2}\), let \((d_{0},d_{1},d_{2})=(b_{1}b_{2},a_{1}b_{2}+a_{2}b_{1},a_{1}a_{2}) \pmod{2^{\ell}}\). Output \(\mathsf{ct}_{\mathsf{mult}}\leftarrow (d_{0},d_{1})+\lfloor 2^{-L}\cdot d_{2}\cdot \mathsf{evk}\rceil \pmod{2^{\ell}}\).

  • ReScale(ct;p). For a ciphertext \(\mathsf {ct}\in {\mathcal R}_{\ell }^{2}\) and an integer p, output \(\mathsf{ct}'\leftarrow \lfloor 2^{-p}\cdot \mathsf{ct}\rceil \pmod{2^{\ell-p}}\).

For a power-of-two integer \(k \le N/2\), HEAAN provides a technique to pack k complex numbers in a single polynomial using a variant of the complex canonical embedding map \({\phi :{\mathbb {C}}^{k} \rightarrow {\mathcal {R}}}\). We restrict the plaintext space to vectors of real numbers throughout this paper. Moreover, we multiply plaintexts by a scale factor of \(2^{p}\) before the rounding operation to maintain their precision.

  • Encode(w;p). For \(\mathbf {w} \in {\mathbb {R}}^{k}\), output the polynomial \(m \leftarrow \phi (2^{p}\cdot \mathbf {w})\in {\mathcal R}\).

  • Decode(m;p). For a plaintext \(m \in {\mathcal R}\) encoding an array of \(k \le N/2\) messages (k a power of two), output the vector \(\mathbf {w} \leftarrow \phi ^{-1} (m / 2^{p}) \in {\mathbb {R}}^{k}\).

The encoding/decoding techniques support parallel computation over encryption, yielding a better amortized timing. In addition, the HEAAN scheme provides a rotation operation on plaintext slots, i.e., it enables us to securely obtain an encryption of the shifted plaintext vector \((w_{r},\dots,w_{k-1},w_{0},\dots,w_{r-1})\) from an encryption of \((w_{0},\dots,w_{k-1})\). It requires additional public information rk, called the rotation key. We denote the rotation operation as follows.

  • Rotaterk(ct;r). For the rotation key rk, output a ciphertext \(\mathsf{ct}'\) encrypting the plaintext vector of ct rotated by r positions.

Refer to [14] for the technical details and noise analysis.
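The scale-factor bookkeeping above can be mirrored in plain fixed-point arithmetic. The sketch below only models the scaling semantics (encoding multiplies by \(2^p\) and rounds, a product of two scaled values carries \(2^{2p}\), and ReScale drops p bits); it does not implement the actual polynomial encoding or encryption.

```python
P = 30  # bit precision of plaintexts, matching the paper's later choice of p

def encode(w, p=P):
    return round(w * 2**p)        # fixed-point representative with scale 2^p

def decode(m, p=P):
    return m / 2**p

def rescale(m, p=P):
    return round(m / 2**p)        # drop p bits of scale, as ReScale does

a, b = encode(0.25), encode(-1.5)
prod = rescale(a * b)             # scale 2^(2p) -> 2^p after a multiplication
print(decode(prod))               # -0.375
```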

Database encoding

For efficient computation, it is crucial to find a good encoding method for the given database. The HEAAN scheme supports the encryption of a plaintext vector and slot-wise operations over encryption. However, our learning data are represented by a matrix \((z_{ij})_{1\le i\le n, 0\le j\le f}\). A recent work [13] used the column-wise approach, i.e., a vector of the data for one feature \((z_{ij})_{1\le i\le n}\) is encrypted in a single ciphertext. Consequently, this method required (f+1) ciphertexts to encrypt the whole dataset.

In this subsection, we suggest a more efficient encoding method to encrypt a matrix in a single ciphertext. A training dataset consists of n samples \(\mathbf {z}_{i}\in {\mathbb {R}}^{f+1}\) for 1≤in, which can be represented as a matrix Z as follows:

$$Z=\left[\begin{array}{cccc} z_{10} & z_{11} & \cdots & z_{1f} \\ z_{20} & z_{21} & \cdots & z_{2f} \\ \vdots & \vdots & \ddots & \vdots \\ z_{n0} & z_{n1} & \cdots & z_{nf} \end{array}\right].$$

For simplicity, we assume that n and (f+1) are power-of-two integers satisfying \(\log n + \log (f+1) \le \log (N/2)\). Then we can pack the whole matrix in a single ciphertext in a row-by-row manner. Specifically, we identify the matrix \((z_{ij})_{1\le i\le n, 0\le j\le f}\) with the k-dimensional vector \(\mathbf{w} = (w_{\ell})_{0\le \ell < n\cdot(f+1)}\), where \(w_{\ell} = z_{ij}\) for \(\ell = (f+1)(i-1)+j\), that is,

$$Z\mapsto \mathbf{w}=(z_{10},\dots,z_{1f},z_{20},\dots,z_{2f},\dots,z_{n0},\dots,z_{nf}).$$

In a general case, we can pad zeros to set the number of samples and the dimension of a weight vector as powers of two.
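A small sketch of this row-by-row packing with the power-of-two padding, written in plain NumPy (the flat array stands in for the plaintext slots):

```python
import numpy as np

def next_pow2(x):
    return 1 << (x - 1).bit_length()

def pack(Z):
    """Flatten an n x (f+1) matrix row by row after zero-padding to powers of two."""
    n, d = Z.shape
    padded = np.zeros((next_pow2(n), next_pow2(d)))
    padded[:n, :d] = Z
    return padded.flatten()       # w_l = z_ij with l = d'*(i-1) + j, d' the padded width

Z = np.arange(1.0, 13.0).reshape(3, 4)   # n = 3 samples, f + 1 = 4 entries each
w = pack(Z)                              # padded to a 4 x 4 block, 16 slots
print(w)
```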

It is necessary to perform shifting operations of row and column vectors for the evaluation of the GD algorithm. In the rest of this subsection, we explain how to perform these operations using the rotation algorithm provided in the HEAAN scheme. As described above, the algorithm Rotate(ct;r) can shift the encrypted vector by r positions. In particular, this operation is useful in our implementation when r=f+1 or r=1. For the first case, a given matrix Z=(zij)1≤in,0≤jf is converted into the matrix

$$Z'=\left[\begin{array}{cccc} z_{20} & z_{21} & \cdots & z_{2f} \\ \vdots & \vdots & \ddots & \vdots \\ z_{n0} & z_{n1} & \cdots & z_{nf} \\ z_{10} & z_{11} & \cdots & z_{1f} \end{array}\right],$$

while the latter case outputs the matrix

$$Z^{\prime\prime}=\left[\begin{array}{cccc} z_{11} & \cdots & z_{1f} & z_{20} \\ z_{21} & \cdots & z_{2f} & z_{30} \\ \vdots & \vdots & \ddots & \vdots \\ z_{n1} & \cdots & z_{nf} & z_{10} \end{array}\right]$$

over encryption. The matrix Z′ is obtained from Z by shifting its row vectors, and Z′′ can be viewed as an incomplete column shifting because of its last column.
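These two shifts can be checked on the packed vector directly; in the sketch below, np.roll with a negated shift plays the role of Rotate(ct; r), and the dimensions are illustrative:

```python
import numpy as np

def rotate(w, r):
    return np.roll(w, -r)          # left cyclic shift by r positions

n, d = 4, 4                        # d = f + 1, both powers of two
Z = np.arange(n * d).reshape(n, d)
w = Z.flatten()

print(rotate(w, d).reshape(n, d))  # Z': row vectors shifted up by one
print(rotate(w, 1).reshape(n, d))  # Z'': incomplete column shift, last column wraps
```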

Polynomial approximation of the sigmoid function

One limitation of the existing HE cryptosystems is that they only support polynomial arithmetic operations. The evaluation of the sigmoid function is an obstacle for the implementation of the logistic regression since it cannot be expressed as a polynomial.

Kim et al. [13] used the least squares approach to find a global polynomial approximation of the sigmoid function. We adopt this approximation method and consider the degree 3, 5, and 7 least squares polynomials of the sigmoid function over the domain [−8,8]. We observed that the inner product values \(\mathbf {z}_{i}^{T}{\boldsymbol {\beta }}^{(t)}\) in our experiments belong to this interval. For simplicity, a least squares polynomial of σ(−x) will be denoted by g(x), so that \(g\left (\mathbf {z}_{i}^{T}{\boldsymbol {\beta }}^{(t)}\right)\approx \sigma \left (-\mathbf {z}_{i}^{T}{\boldsymbol {\beta }}^{(t)}\right)\) when \(\left |\mathbf {z}_{i}^{T}{\boldsymbol {\beta }}^{(t)}\right |\le 8\). The approximate polynomials g(x) of degree 3, 5, and 7 are computed as follows:

$$\left\{\begin{array}{ll} g_{3}(x) & = 0.5-1.20096\cdot(x/8)+0.81562\cdot(x/8)^{3},\\ g_{5}(x) & = 0.5-1.53048\cdot(x/8)+2.3533056\cdot(x/8)^{3} \\ & \ \ \ -1.3511295\cdot(x/8)^{5}, \\ g_{7}(x) & = 0.5-1.73496\cdot(x/8)+4.19407\cdot(x/8)^{3} \\ & \ \ \ -5.43402\cdot(x/8)^{5}+2.50739\cdot(x/8)^{7}.\\ \end{array}\right.$$

A low-degree polynomial requires a smaller evaluation depth while a high-degree polynomial has a better precision. The maximum errors between σ(−x) and the least squares g3(x), g5(x), and g7(x) are approximately 0.114, 0.061 and 0.032, respectively.
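The polynomials and their stated maximum errors can be reproduced directly from the coefficients above; a quick numerical check on a fine grid:

```python
import numpy as np

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))

def g3(x):
    t = x / 8
    return 0.5 - 1.20096 * t + 0.81562 * t**3

def g5(x):
    t = x / 8
    return 0.5 - 1.53048 * t + 2.3533056 * t**3 - 1.3511295 * t**5

def g7(x):
    t = x / 8
    return 0.5 - 1.73496 * t + 4.19407 * t**3 - 5.43402 * t**5 + 2.50739 * t**7

x = np.linspace(-8, 8, 10001)
for g in (g3, g5, g7):            # maximum errors ~0.114, ~0.061, ~0.032
    print(g.__name__, np.max(np.abs(g(x) - sigma(-x))))
```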

Homomorphic evaluation of the gradient descent

This section explains how to securely train the logistic regression model using the HEAAN scheme. To be precise, we explicitly describe the full pipeline of the evaluation of the GD algorithm. We adopt the same assumptions as in the previous section so that the whole database can be encrypted in a single ciphertext.

First of all, a client encrypts the dataset and the initial (random) weight vector β(0) and sends them to the public cloud. The dataset is encoded to a matrix Z of size n×(f+1) and the weight vector is copied n times to fill the plaintext slots. The plaintext matrices of the resulting ciphertexts are described as follows:

$$\begin{aligned} \mathsf{ct}_{z}=& \texttt{Enc}\left[\begin{array}{cccc} z_{10} & z_{11} & \cdots & z_{1f} \\ z_{20} & z_{21} & \cdots & z_{2f} \\ \vdots & \vdots & \ddots & \vdots \\ z_{n0} & z_{n1} & \cdots & z_{nf} \end{array}\right],\\ \\ \mathsf{ct}_{\beta}^{(0)}=& \texttt{Enc}\left[\begin{array}{cccc} \beta_{0}^{(0)} & \beta_{1}^{(0)} & \cdots & \beta_{f}^{(0)} \\ \beta_{0}^{(0)} & \beta_{1}^{(0)} & \cdots & \beta_{f}^{(0)} \\ \vdots & \vdots & \ddots & \vdots \\ \beta_{0}^{(0)} & \beta_{1}^{(0)} & \cdots & \beta_{f}^{(0)} \end{array}\right]. \end{aligned}$$

As mentioned before, both Z and β(0) are scaled by a factor of \(2^{p}\) before encryption to maintain the precision of plaintexts. We omit the scaling factor in the rest of this section, since every step returns a ciphertext with a scaling factor of \(2^{p}\).

The public server takes two ciphertexts ctz and \(\mathsf {ct}_{\beta }^{(t)}\) and evaluates the GD algorithm to find an optimal modeling vector. The goal of each iteration is to update the modeling vector β(t) using the gradient of loss function:

$${\boldsymbol{\beta}}^{(t+1)} \leftarrow {\boldsymbol{\beta}}^{(t)} + \frac{\alpha_{t}}{n}\sum_{i=1}^{n} \sigma\left(-\mathbf{z}_{i}^{T} {\boldsymbol{\beta}}^{(t)}\right) \cdot \mathbf{z}_{i}$$

where αt denotes the learning rate at the t-th iteration. Each iteration consists of the following eight steps.

Step 1: For given two ciphertexts ctz and \(\mathsf {ct}_{\beta }^{(t)}\), compute their multiplication and rescale it by p bits:

$$\mathsf{ct}_{1} \leftarrow \texttt{ReScale}\left(\texttt{Mult}\left(\mathsf{ct}^{(t)}_{\beta}, \mathsf{ct}_{z}\right); p\right).\\ $$

The output ciphertext contains the values \(z_{ij} \cdot \beta ^{(t)}_{j}\) in its plaintext slots, i.e.,

$$\mathsf{ct}_{1}=\texttt{Enc}\left[\begin{array}{cccc} z_{10}\cdot\beta_{0}^{(t)} & z_{11}\cdot\beta_{1}^{(t)} & \cdots & z_{1f}\cdot\beta_{f}^{(t)} \\ z_{20}\cdot\beta_{0}^{(t)} & z_{21}\cdot\beta_{1}^{(t)} & \cdots & z_{2f}\cdot\beta_{f}^{(t)} \\ \vdots & \vdots & \ddots & \vdots \\ z_{n0}\cdot\beta_{0}^{(t)} & z_{n1}\cdot\beta_{1}^{(t)} & \cdots & z_{nf}\cdot\beta_{f}^{(t)} \end{array}\right].$$

Step 2: To obtain the inner product \(\mathbf {z}_{i}^{T} {\boldsymbol {\beta }}^{(t)}\), the public cloud aggregates the values \(z_{ij}\beta _{j}^{(t)}\) in the same row. This step can be done by applying the incomplete column shifting operation.

One simple way is to repeat this operation (f+1) times, but the computational cost can be reduced to log(f+1) rotations by recursively adding ct1 to its rotations:

$$\mathsf{ct}_{1} \leftarrow \texttt{Add}\left(\mathsf{ct}_{1}, \texttt{Rotate}\left(\mathsf{ct}_{1}; 2^{j}\right)\right),$$

for j=0,1,…, log(f+1)−1. Denoting the resulting ciphertext by ct2, it encrypts the inner product values \(\mathbf {z}_{i}^{T}{\boldsymbol {\beta }}^{(t)}\) in the first column and some “garbage” values, denoted by ⋆, in the other columns, i.e.,

$$\mathsf{ct}_{2}=\texttt{Enc}\left[\begin{array}{cccc} \mathbf{z}_{1}^{T}{\boldsymbol{\beta}}^{(t)} & \star & \cdots & \star \\ \mathbf{z}_{2}^{T}{\boldsymbol{\beta}}^{(t)} & \star & \cdots & \star \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{z}_{n}^{T}{\boldsymbol{\beta}}^{(t)} & \star & \cdots & \star \end{array}\right].$$
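A plaintext sketch of this logarithmic fold (the values are illustrative; the wrap-around partial sums that land outside the first column are exactly the ⋆ garbage entries):

```python
import numpy as np

def rotate(w, r):
    return np.roll(w, -r)

n, d = 4, 4                                 # d = f + 1, a power of two
acc = np.arange(1.0, n * d + 1)             # stands in for the slots of ct_1
for j in range(d.bit_length() - 1):         # j = 0, ..., log(f+1) - 1
    acc = acc + rotate(acc, 1 << j)
print(acc.reshape(n, d)[:, 0])              # block sums in the first column: 10, 26, 42, 58
```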

Step 3: This step performs a constant multiplication in order to annihilate the garbage values. It can be obtained by computing the encoding polynomial c←Encode(C;pc) of the matrix

$$C=\left[\begin{array}{cccc} 1 & 0 & \cdots & 0 \\ 1 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 0 & \cdots & 0 \end{array}\right],$$

using the scaling factor \(2^{p_{c}}\) for some integer \(p_{c}\). The parameter \(p_{c}\) is chosen as the bit precision of plaintexts, so it can be smaller than the parameter p.

Finally we multiply the polynomial c to the ciphertext ct2 and rescale it by pc bits:

$$\begin{array}{*{20}l} \mathsf{ct}_{3} &\leftarrow \texttt{ReScale}(\texttt{CMult}(\mathsf{ct}_{2}; c); p_{c}). \end{array} $$

The garbage values are multiplied by zero, while the inner products in the first column are preserved. Hence the output ciphertext ct3 encrypts the inner product values in the first column and zeros in the others:

$$\mathsf{ct}_{3}=\texttt{Enc}\left[\begin{array}{cccc} \mathbf{z}_{1}^{T}{\boldsymbol{\beta}}^{(t)} & 0 & \cdots & 0 \\ \mathbf{z}_{2}^{T}{\boldsymbol{\beta}}^{(t)} & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{z}_{n}^{T}{\boldsymbol{\beta}}^{(t)} & 0 & \cdots & 0 \end{array}\right].$$

Step 4: The goal of this step is to replicate the inner product values to other columns. Similar to Step 2, it can be done by adding the input ciphertext to its column shifting recursively, but in the opposite direction

$$\mathsf{ct}_{3} \leftarrow \texttt{Add}\left(\mathsf{ct}_{3}, \texttt{Rotate}\left(\mathsf{ct}_{3}; -2^{j}\right)\right)$$

for j=0,1,…, log(f+1)−1. The output ciphertext ct4 has the same inner product value in each row:

$$\mathsf{ct}_{4}=\texttt{Enc}\left[\begin{array}{cccc} \mathbf{z}_{1}^{T}{\boldsymbol{\beta}}^{(t)} & \mathbf{z}_{1}^{T}{\boldsymbol{\beta}}^{(t)} & \cdots & \mathbf{z}_{1}^{T}{\boldsymbol{\beta}}^{(t)} \\ \mathbf{z}_{2}^{T}{\boldsymbol{\beta}}^{(t)} & \mathbf{z}_{2}^{T}{\boldsymbol{\beta}}^{(t)} & \cdots & \mathbf{z}_{2}^{T}{\boldsymbol{\beta}}^{(t)} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{z}_{n}^{T}{\boldsymbol{\beta}}^{(t)} & \mathbf{z}_{n}^{T}{\boldsymbol{\beta}}^{(t)} & \cdots & \mathbf{z}_{n}^{T}{\boldsymbol{\beta}}^{(t)} \end{array}\right].$$

Step 5: This step simply evaluates an approximating polynomial of the sigmoid function, i.e., ct5←g(ct4) for some g ∈ {g3,g5,g7}. The output ciphertext encrypts the values of \(g\left (\mathbf {z}_{i}^{T}{\boldsymbol {\beta }}^{(t)}\right)\) in its plaintext slots:

$$\mathsf{ct}_{5}=\texttt{Enc}\left[\begin{array}{ccc} g\left(\mathbf{z}_{1}^{T}{\boldsymbol{\beta}}^{(t)}\right) & \cdots & g\left(\mathbf{z}_{1}^{T}{\boldsymbol{\beta}}^{(t)}\right) \\ g\left(\mathbf{z}_{2}^{T}{\boldsymbol{\beta}}^{(t)}\right) & \cdots & g\left(\mathbf{z}_{2}^{T}{\boldsymbol{\beta}}^{(t)}\right) \\ \vdots & \ddots & \vdots \\ g\left(\mathbf{z}_{n}^{T}{\boldsymbol{\beta}}^{(t)}\right) & \cdots & g\left(\mathbf{z}_{n}^{T}{\boldsymbol{\beta}}^{(t)}\right) \end{array}\right].$$

Step 6: The public cloud multiplies the ciphertext ct5 with the encrypted dataset ctz and rescales the resulting ciphertext by p bits:

$$\begin{array}{*{20}l} \mathsf{ct}_{6} &\leftarrow \texttt{ReScale}(\texttt{Mult}(\mathsf{ct}_{5}, \mathsf{ct}_{z}); p). \end{array} $$

The output ciphertext encrypts the n vectors \(g\left (\mathbf {z}_{i}^{T}{\boldsymbol {\beta }}^{(t)}\right)\cdot \mathbf {z}_{i}\) in each row:

$$\mathsf{ct}_{6}=\texttt{Enc}\left[\begin{array}{ccc} g\left(\mathbf{z}_{1}^{T}{\boldsymbol{\beta}}^{(t)}\right)\cdot z_{10} & \cdots & g\left(\mathbf{z}_{1}^{T}{\boldsymbol{\beta}}^{(t)}\right)\cdot z_{1f} \\ g\left(\mathbf{z}_{2}^{T}{\boldsymbol{\beta}}^{(t)}\right)\cdot z_{20} & \cdots & g\left(\mathbf{z}_{2}^{T}{\boldsymbol{\beta}}^{(t)}\right)\cdot z_{2f} \\ \vdots & \ddots & \vdots \\ g\left(\mathbf{z}_{n}^{T}{\boldsymbol{\beta}}^{(t)}\right) \cdot z_{n0} & \cdots & g\left(\mathbf{z}_{n}^{T}{\boldsymbol{\beta}}^{(t)}\right)\cdot z_{nf} \end{array}\right].$$

Step 7: This step aggregates the vectors \(g\left (\mathbf {z}_{i}^{T}{\boldsymbol {\beta }}^{(t)}\right)\cdot \mathbf {z}_{i}\) to compute the gradient of the loss function. It is obtained by recursively adding ct6 to its row shifting:

$$\mathsf{ct}_{6} \leftarrow \texttt{Add}\left(\mathsf{ct}_{6}, \texttt{Rotate}\left(\mathsf{ct}_{6}; 2^{j}\right)\right)$$

for j= log(f+1),…, log(f+1)+ logn−1. The output ciphertext is

$${}\mathsf{ct}_{7}=\texttt{Enc}\left[\begin{array}{ccc} \sum_{i} g\left(\mathbf{z}_{i}^{T}{\boldsymbol{\beta}}^{(t)}\right)\cdot z_{i0} & \cdots & \sum_{i} g\left(\mathbf{z}_{i}^{T}{\boldsymbol{\beta}}^{(t)}\right)\cdot z_{if} \\ \sum_{i} g\left(\mathbf{z}_{i}^{T}{\boldsymbol{\beta}}^{(t)}\right)\cdot z_{i0} & \cdots & \sum_{i} g\left(\mathbf{z}_{i}^{T}{\boldsymbol{\beta}}^{(t)}\right)\cdot z_{if} \\ \vdots & \ddots & \vdots \\ \sum_{i} g\left(\mathbf{z}_{i}^{T}{\boldsymbol{\beta}}^{(t)}\right)\cdot z_{i0} & \cdots & \sum_{i} g\left(\mathbf{z}_{i}^{T}{\boldsymbol{\beta}}^{(t)}\right)\cdot z_{if} \\ \end{array}\right],$$

as desired.

Step 8: For the learning rate αt, the server uses the parameter pc to compute the scaled learning rate \(\Delta ^{(t)}= \lfloor {2^{p_{c}}\cdot \alpha _{t}}\rceil \). The public cloud then updates β(t) using the ciphertext ct7 and the constant Δ(t):

$$\begin{array}{*{20}l} &\mathsf{ct}_{8} \leftarrow \texttt{ReScale}\left(\Delta^{(t)}\cdot \mathsf{ct}_{7}; p_{c}\right),\\ &\mathsf{ct}_{\beta}^{(t+1)}\leftarrow \texttt{Add}\left(\mathsf{ct}_{\beta}^{(t)}, \mathsf{ct}_{8}\right). \end{array} $$

Finally it returns a ciphertext encrypting the updated modeling vector

$$\mathsf{ct}_{\beta}^{(t+1)}=\texttt{Enc}\left[\begin{array}{cccc} \beta_{0}^{(t+1)} & \beta_{1}^{(t+1)} & \cdots & \beta_{f}^{(t+1)} \\ \beta_{0}^{(t+1)} & \beta_{1}^{(t+1)} & \cdots & \beta_{f}^{(t+1)} \\ \vdots & \vdots & \ddots & \vdots \\ \beta_{0}^{(t+1)} & \beta_{1}^{(t+1)} & \cdots & \beta_{f}^{(t+1)} \end{array}\right].$$

where \(\beta _{j}^{(t+1)}=\beta _{j}^{(t)}+\frac {\alpha _{t}}{n}\sum _{i} g\left (\mathbf {z}_{i}^{T}{\boldsymbol {\beta }}^{(t)}\right)\cdot z_{ij}\).
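The whole iteration can be traced in the clear on the packed vector. The sketch below mirrors Steps 1–8 slot for slot (Mult/CMult become slot-wise products, Add stays addition, Rotate becomes np.roll, and the ReScale steps are implicit since we compute with plain floats); the toy dimensions and data are illustrative assumptions, and the α_t/n scaling is written explicitly to match the update formula above.

```python
import numpy as np

def rotate(w, r):
    return np.roll(w, -r)

def g3(x):
    t = x / 8
    return 0.5 - 1.20096 * t + 0.81562 * t**3

def gd_iteration(w_z, w_beta, n, d, alpha):
    log_d = d.bit_length() - 1
    ct1 = w_z * w_beta                         # Step 1: z_ij * beta_j
    ct2 = ct1
    for j in range(log_d):                     # Step 2: fold within each row; inner
        ct2 = ct2 + rotate(ct2, 1 << j)        #         products land in column 0
    mask = np.zeros(n * d)
    mask[::d] = 1.0
    ct3 = ct2 * mask                           # Step 3: wipe the garbage columns
    ct4 = ct3
    for j in range(log_d):                     # Step 4: replicate across each row
        ct4 = ct4 + rotate(ct4, -(1 << j))
    ct5 = g3(ct4)                              # Step 5: approximate sigma(-x)
    ct6 = ct5 * w_z                            # Step 6: g(z_i^T beta) * z_i
    ct7 = ct6
    for j in range(log_d, log_d + n.bit_length() - 1):
        ct7 = ct7 + rotate(ct7, 1 << j)        # Step 7: sum over the samples i
    return w_beta + (alpha / n) * ct7          # Step 8: scaled gradient update

n, d = 4, 4
rng = np.random.default_rng(1)
w_z = rng.normal(size=n * d)                   # packed dataset Z
w_beta = np.tile(rng.normal(size=d), n)        # beta copied n times, as in ct_beta
print(gd_iteration(w_z, w_beta, n, d, alpha=1.0).reshape(n, d))  # identical rows
```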

Homomorphic evaluation of Nesterov’s accelerated gradient

The performance of leveled HE schemes depends heavily on the depth of the circuit to be evaluated. The bottleneck of the homomorphic evaluation of the GD algorithm is that the weight vector β(t) must be updated iteratively. Consequently, the total depth grows linearly in the number of iterations, so it should be minimized for a practical implementation.

For the homomorphic evaluation of Nesterov’s accelerated gradient, a client sends one more ciphertext \(\mathsf {ct}_{v}^{(0)}\) encrypting the initial vector v(0) to the public cloud. Then the server uses an encryption ctz of the dataset Z to update the two ciphertexts \(\mathsf {ct}_{v}^{(t)}\) and \(\mathsf {ct}_{\beta }^{(t)}\) at each iteration. One can securely compute β(t+1) in the same way as in the previous section. Nesterov’s accelerated gradient requires one more step to compute the second equation of (1) and obtain an encryption of v(t+1) from \(\mathsf {ct}_{\beta }^{(t)}\) and \(\mathsf {ct}_{\beta }^{(t+1)}\).

Step 9: Let \(\Delta ^{(t)}_{1} = \lfloor {2^{p_{c}}\cdot \gamma _{t}}\rceil \) and \(\Delta ^{(t)}_{2} = 2^{p_{c}}-\Delta ^{(t)}_{1}\). The server obtains the ciphertext \(\mathsf {ct}_{v}^{(t+1)}\) by computing

$$\begin{array}{*{20}l} \mathsf{ct}_{v}^{(t+1)} &\leftarrow \texttt{Add}\left(\Delta^{(t)}_{2}\cdot\mathsf{ct}_{\beta}^{(t+1)}, \Delta^{(t)}_{1}\cdot\mathsf{ct}_{\beta}^{(t)}\right),\\ \mathsf{ct}_{v}^{(t+1)} &\leftarrow \texttt{ReScale}\left(\mathsf{ct}_{v}^{(t+1)}; p_{c}\right). \end{array} $$

Then the output ciphertext is

$$\mathsf{ct}_{v}^{(t+1)}=\texttt{Enc}\left[\begin{array}{cccc} v_{0}^{(t+1)} & v_{1}^{(t+1)} & \cdots & v_{f}^{(t+1)} \\ v_{0}^{(t+1)} & v_{1}^{(t+1)} & \cdots & v_{f}^{(t+1)} \\ \vdots & \vdots & \ddots & \vdots \\ v_{0}^{(t+1)} & v_{1}^{(t+1)} & \cdots & v_{f}^{(t+1)} \end{array}\right],$$

which encrypts \(v_{j}^{(t+1)}=(1 - \gamma _{t})\cdot \beta ^{(t+1)}_{j} + \gamma _{t}\cdot \beta ^{(t)}_{j}\) in the plaintext slots.

Results

In this section, we present the parameter sets with experimental results. Our implementation is based on the HEAAN library [21] that implements the approximate HE scheme of Cheon et al. [14]. The source code is publicly available on GitHub [16].

Parameters settings

We explain how to choose the parameter sets for the homomorphic evaluation of the (Nesterov’s) GD algorithm, together with a security analysis. We start with the parameter L, the bit size of a fresh ciphertext modulus. The modulus of a ciphertext is reduced after each ReScale operation and after the evaluation of an approximate polynomial g(x).

The ReScale procedures after homomorphic multiplications (Steps 1 and 6) reduce the ciphertext modulus by p bits, while the ReScale procedures after constant multiplications (Steps 3 and 8) require pc bits of modulus reduction. Note that the ciphertext modulus remains the same in Step 9 of Nesterov’s accelerated gradient if we compute Steps 8 and 9 together using some precomputed constants. We use a method similar to previous work for the evaluation of the sigmoid function (see [13] for details): the ciphertext modulus is reduced by (2p+3) bits for the evaluation of g3(x), and by (3p+3) bits for that of g5(x) and g7(x). Therefore, we obtain the following lower bound on the parameter L:

$$L=\left\{\begin{array}{ll} \text{\textsc{IterNum}} \cdot (3 p + 2 p_{c} + 3) + L_{0} & g = g_{3}, \\ \text{\textsc{IterNum}} \cdot (4 p + 2 p_{c} + 3) + L_{0} & g \in \{g_{5}, g_{7}\}, \end{array}\right. $$

where IterNum is the number of iterations of the GD algorithm and L0 denotes the bit size of the output ciphertext modulus. The modulus of the output ciphertext should be larger than \(2^{p}\) in order to encrypt the resulting weight vector and maintain its precision. We take p=30, pc=20 and L0=35 in our implementation.

The dimension of the cyclotomic ring \({\mathcal {R}}\) is chosen as \(N=2^{16}\) following the security estimator of Albrecht et al. [22] for the learning with errors problem. In this case, the bit size L of a fresh ciphertext modulus should be bounded by 1284 to ensure the security level λ=80 against known attacks. Hence we run IterNum = 9 iterations of the GD algorithm when g=g3, and IterNum = 7 iterations when g=g5 or g=g7.
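These choices can be sanity-checked against the bound with a small helper computing the modulus consumption from the formula above; both settings fit under 1284 bits:

```python
def modulus_bits(iter_num, deg, p=30, pc=20, L0=35):
    """Total modulus bits consumed: IterNum * (per-iteration depth) + L0."""
    per_iter = (3 * p if deg == 3 else 4 * p) + 2 * pc + 3
    return iter_num * per_iter + L0

print(modulus_bits(9, deg=3))   # 1232 <= 1284 for g3
print(modulus_bits(7, deg=5))   # 1176 <= 1284 for g5 and g7
```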

The smoothing parameter γt is chosen in accordance with [15]. The choice of a proper GD learning rate αt normally depends on the problem at hand. Choosing αt too small leads to slow convergence, while choosing it too large can lead to divergence or to fluctuation near a local optimum. The rate is often tuned by trial and error, which we were not able to perform. Under these conditions, a harmonic progression seems a good candidate, and we choose the learning rate \(\alpha _{t} = \frac {10}{t+1}\) in our implementation.

Implementation

All experiments were performed on a machine with an Intel Xeon E5-2620 v4 processor running at 2.10 GHz.

Task for the iDASH challenge. In the genomic data privacy and security protection competition 2017, the goal of Track 3 was to devise a weight vector to predict disease using genotype and phenotype data (Additional file 1: iDASH). This dataset consists of 1579 samples, each of which has 18 features and a cohort label (disease vs. healthy). Since we use the ring dimension \(N=2^{16}\), we can only pack up to \(N/2=2^{15}\) values in a single ciphertext; after padding the numbers of samples and features to powers of two (2048 and 32, respectively), the dataset occupies \(2^{16} > 2^{15}\) slots. We overcome this issue by dividing the dataset into two parts of sizes 1579×16 and 1579×3 and encoding them separately into two ciphertexts. In general, this method can be applied to datasets with any number of features: the dataset can be encrypted into \(\lceil (f+1)\cdot n/(N/2)\rceil \) ciphertexts, with n and (f+1) padded to powers of two.
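A quick check of this ciphertext count under the power-of-two padding described above (the helper name is ours, for illustration):

```python
import math

def num_ciphertexts(n, f, N):
    """Ciphertexts needed to pack n samples of f+1 entries, padded to powers of two."""
    pow2 = lambda x: 1 << (x - 1).bit_length()
    return math.ceil(pow2(f + 1) * pow2(n) / (N // 2))

print(num_ciphertexts(1579, 18, 2**16))   # 2, matching the split into two parts
```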

To estimate the validity of our method, we used the 10-fold cross-validation (CV) technique: it randomly partitions the dataset into ten folds of approximately equal size, and uses every subset of 9 folds for training and the remaining fold for testing the model. The performance of our solution, including the average running time per fold of 10-fold CV (encryption and evaluation) and the storage (encrypted dataset), is shown in Table 1. The table also provides the average accuracy and the AUC (Area Under the Receiver Operating Characteristic Curve), which estimate the quality of a binary classifier.

Table 1 Implementation results for iDASH dataset with 10-fold CV

Comparison We present some experimental results comparing the performance of our implementation to [13]. For a fair comparison, we use the same 5-fold CV technique on five datasets: the Myocardial Infarction dataset from Edinburgh [23] (Additional file 2: Edinburgh), the Low Birth Weight Study (Additional file 3: lbw), Nhanes III (Additional file 4: nhanes3), the Prostate Cancer Study (Additional file 5: pcs), and the Umaru Impact Study (Additional file 6: uis) [24–27]. All datasets have a single binary outcome variable.

All the experimental results are summarized in Table 2. Our new packing method reduces the storage of ciphertexts, and the use of Nesterov’s accelerated gradient achieves a much higher speed than the approach of [13]. For example, it took 3.6 min to train a logistic regression model on the encrypted Edinburgh dataset of size 0.02 GB, compared to 114 min and 0.69 GB for the previous work [13], while achieving comparable quality of the output models.

Table 2 Implementation results for other datasets with 5-fold CV

Discussion

The rapid growth of computing power has initiated the study of more complicated ML algorithms in various fields, including biomedical data analysis [28, 29]. HE is a promising solution to the privacy issue, but its efficiency in real applications remains an open question. Extending this work to other ML algorithms such as deep learning is a natural next step.

One constraint of our approach is that the number of iterations of the GD algorithm is limited by the choice of HE parameters. In terms of asymptotic complexity, applying the bootstrapping method for the approximate HE scheme [30] to the GD algorithm would achieve a computation cost linear in the number of iterations.

Conclusion

In this paper, we presented a solution to homomorphically evaluate the learning phase of a logistic regression model using the gradient descent algorithm and the approximate HE scheme. Our solution demonstrates good performance, and the quality of learning is comparable to that of the unencrypted case. Our encoding method can be easily extended to large-scale datasets, which shows the practical potential of our approach.

Abbreviations

AUC:

Area under the receiver operating characteristic curve

CV:

Cross validation

GD:

Gradient descent

HE:

Homomorphic encryption

ML:

Machine learning

References

  1. Samuel AL. Some studies in machine learning using the game of checkers. IBM J Res Dev. 1959; 3(3):210–29.

  2. Dietz E. Application of logistic regression and logistic discrimination in medical decision making. Biom J. 1987; 29(6):747–51.

  3. Rousseau D. Biomedical Research: Changing the Common Rule by David Rousseau – Ammon & Rousseau Translations. 2017. https://www.ammon-rousseau.com/changing-the-rules-by-david-rousseau/ [Accessed 19 Aug 2017] Available from: http://www.webcitation.org/6spHgiYRI.

  4. Nikolaenko V, Weinsberg U, Ioannidis S, Joye M, Boneh D, Taft N. Privacy-preserving ridge regression on hundreds of millions of records. In: Security and Privacy (SP), 2013 IEEE Symposium On. IEEE: 2013. p. 334–48.

  5. Yao AC-C. How to generate and exchange secrets. In: Foundations of Computer Science, 1986., 27th Annual Symposium On. IEEE: 1986. p. 162–7.

  6. El Emam K, Samet S, Arbuckle L, Tamblyn R, Earle C, Kantarcioglu M. A secure distributed logistic regression protocol for the detection of rare adverse drug events. J Am Med Inform Assoc. 2012; 20(3):453–61.

  7. Nardi Y, Fienberg SE, Hall RJ. Achieving both valid and secure logistic regression analysis on aggregated data from different private sources. J Priv Confidentiality. 2012; 4(1):9.

  8. Mohassel P, Zhang Y. SecureML: A System for Scalable Privacy-Preserving Machine Learning. IEEE Symp Secur Priv. 2017.

  9. Wu S, Teruya T, Kawamoto J, Sakuma J. Privacy-preservation for stochastic gradient descent application to secure logistic regression. 27th Annu Conf Japan Soc Artif Intell. 2013;1–4.

  10. Paillier P. Public-key cryptosystems based on composite degree residuosity classes. In: International Conference on the Theory and Applications of Cryptographic Techniques. Springer: 1999. p. 223–38.

  11. Aono Y, Hayashi T, Trieu Phong L, Wang L. Scalable and secure logistic regression via homomorphic encryption. In: Proceedings of the Sixth ACM Conference on Data and Application Security and Privacy. ACM: 2016. p. 142–4.

  12. Xie W, Wang Y, Boker SM, Brown DE. Privlogit: Efficient privacy-preserving logistic regression by tailoring numerical optimizers. arXiv preprint arXiv:1611.01170. 2016.

  13. Kim M, Song Y, Wang S, Xia Y, Jiang X. Secure logistic regression based on homomorphic encryption: Design and evaluation. JMIR Med Inform. 2018; 6(2).

  14. Cheon JH, Kim A, Kim M, Song Y. Homomorphic encryption for arithmetic of approximate numbers. In: Advances in Cryptology–ASIACRYPT 2017: 23rd International Conference on the Theory and Application of Cryptology and Information Security. Springer: 2017. p. 409–37.

  15. Nesterov Y. A method of solving a convex programming problem with convergence rate O(1/k2). In: Soviet Mathematics Doklady, vol. 27: 1983. p. 372–6.

  16. Cheon JH, Kim A, Kim M, Lee K, Song Y. Implementation for iDASH Competition 2017. 2017. https://github.com/kimandrik/HEML [Accessed 11 July 2018] Available from: http://www.webcitation.org/70qbe6xii.

  17. Harrell FE. Ordinal logistic regression. In: Regression Modeling Strategies. Springer: 2001. p. 331–43.

  18. Lowrie EG, Lew NL. Death risk in hemodialysis patients: the predictive value of commonly measured variables and an evaluation of death rate differences between facilities. Am J Kidney Dis. 1990; 15(5):458–82.

  19. Lewis CM, Knight J. Introduction to genetic association studies. Cold Spring Harb Protocol. 2012; 2012(3):068163.

  20. Gayle V, Lambert PS. Logistic regression models in sociological research. 2009.

  21. Cheon JH, Kim A, Kim M, Song Y. Implementation of HEAAN. 2016. https://github.com/kimandrik/HEAAN [Accessed 19 Aug 2017] Available from: http://www.webcitation.org/6spMzVJ6U.

  22. Albrecht MR, Player R, Scott S. On the concrete hardness of learning with errors. J Math Cryptol. 2015; 9(3):169–203.

  23. Kennedy R, Fraser H, McStay L, Harrison R. Early diagnosis of acute myocardial infarction using clinical and electrocardiographic data at presentation: derivation and evaluation of logistic regression models. Eur Heart J. 1996; 17(8):1181–91.

  24. lbw: Low Birth Weight study data. 2017. https://rdrr.io/rforge/LogisticDx/man/lbw.html [Accessed 19 Aug 2017] Available from: http://www.webcitation.org/6spNFX2b5.

  25. nhanes3: NHANES III data. 2017. https://rdrr.io/rforge/LogisticDx/man/nhanes3.html [Accessed 19 Aug 2017] Available from: http://www.webcitation.org/6spNJJFDx.

  26. pcs: Prostate Cancer Study data. 2017. https://rdrr.io/rforge/LogisticDx/man/pcs.html [Accessed 19 Aug 2017] Available from: http://www.webcitation.org/6spNLXr5a.

  27. uis: UMARU IMPACT Study data. 2017. https://rdrr.io/rforge/LogisticDx/man/uis.html [Accessed 19 Aug 2017] Available from: http://www.webcitation.org/6spNOLB9n.

  28. Wang Y. Application of deep learning to biomedical informatics. Int J Appl Sci Res Rev. 2016.

  29. Ravì D, Wong C, Deligianni F, Berthelot M, Andreu-Perez J, Lo B, Yang G-Z. Deep learning for health informatics. IEEE J Biomed Health Inform. 2017; 21(1):4–21.

  30. Cheon JH, Han K, Kim A, Kim M, Song Y. Bootstrapping for approximate homomorphic encryption. In: Advances in Cryptology–EUROCRYPT 2018: Annual International Conference on the Theory and Applications of Cryptographic Techniques. Springer: 2018. p. 360–84.

Acknowledgements

The authors would like to thank the editor and reviewers for the thoughtful comments and constructive suggestions, which greatly helped us improve the quality of this manuscript. The authors also thank Jinhyuck Jeong for giving valuable comments to the technical part of the manuscript.

Funding

This work was partly supported by Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No.B0717-16-0098) and by the National Research Foundation of Korea (NRF) Grant funded by the Korean Government (MSIP) (No.2017R1A5A1015626).

MK was supported in part by NIH grants U01TR002062 and U01EB023685. Publication of this article has been funded by the NRF Grant funded by the Korean Government (MSIT) (No.2017R1A5A1015626).

Availability of data and materials

All datasets are available in the Additional files provided with the publication. The HEAAN library is available at https://github.com/kimandrik/HEAAN. Our implementation is available at https://github.com/kimandrik/HEML.

About this supplement

This article has been published as part of BMC Medical Genomics Volume 11 Supplement 4, 2018: Proceedings of the 6th iDASH Privacy and Security Workshop 2017. The full contents of the supplement are available online at https://bmcmedgenomics.biomedcentral.com/articles/supplements/volume-11-supplement-4.

Author information

Authors and Affiliations

Authors

Contributions

JHC designed and supervised the study. KL analyzed the data. AK drafted the source code and MK optimized it. AK and MK performed the experiments. AK and YS are major contributors in writing the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Yongsoo Song.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

All authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional files

Additional file 1

iDASH. iDASH challenge dataset (TXT 59 kb)

Additional file 2

Edinburgh. The Myocardial Infarction dataset from Edinburgh (TXT 24 kb)

Additional file 3

lbw. Low Birth Weight Study dataset (TXT 4 kb)

Additional file 4

nhanes3. Nhanes III dataset (TXT 745 kb)

Additional file 5

pcs. Prostate Cancer Study dataset (TXT 9 kb)

Additional file 6

uis. Umaru Impact Study dataset (TXT 11 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Cite this article

Kim, A., Song, Y., Kim, M. et al. Logistic regression model training based on the approximate homomorphic encryption. BMC Med Genomics 11 (Suppl 4), 83 (2018). https://doi.org/10.1186/s12920-018-0401-7
