
1 Introduction

In traditional encryption, a receiver in possession of a ciphertext either has a corresponding decryption key for it, in which case it can recover the underlying message, or else it can learn no information about the underlying message. Functional encryption (FE) [10, 21, 26, 32] is a vast new paradigm for encryption in which decryption keys are associated with functions, whereby a receiver in possession of a ciphertext and a decryption key for a particular function can recover that function of the underlying message. Intuitively, security requires that the receiver learns nothing else. Due to both its theoretical appeal and its practical importance, FE has gained tremendous attention in recent years.

In particular, this work concerns a compelling extension of FE called multi-input functional encryption (MIFE), introduced by Goldwasser et al. [25]. In MIFE, decryption operates on multiple ciphertexts, such that a receiver with some decryption key is able to recover the associated function applied to all of the underlying plaintexts (i.e., the underlying plaintexts are all arguments to the associated function). MIFE enables a number of important applications not handled by standard (single-input) FE. On the theoretical side, MIFE has interesting applications to non-interactive secure multiparty computation [7]. On the practical side, we reproduce the following example from [25].

Running SQL queries over encrypted data: Suppose we have an encrypted database. A natural goal in this scenario would be to allow a party Alice to perform a certain class of general SQL queries over this database. If we use ordinary functional encryption, Alice would need to obtain a separate secret key for every possible valid SQL query, a potentially exponentially large set. Multi-input functional encryption allows us to address this problem in a flexible way. We highlight two aspects of how Multi-Input Functional Encryption can apply to this example:

  • Let f be the function where \(f(q,x)\) first checks if q is a valid SQL query from the allowed class, and if so \(f(q,x)\) is the output of the query q on the database x. Now, if we give the decryption key corresponding to f and the encryption key \(EK_1\) (corresponding to the first input of the function f) to Alice, then Alice can choose a valid query q and encrypt it under her encryption key \(EK_1\) to obtain ciphertext \(c_1\). Then she could use her decryption key on ciphertexts \(c_1\) and \(c_2\), where \(c_2\) is the encrypted database, to obtain the results of the SQL query.

  • Furthermore, if our application demanded that multiple users add or manipulate different entries in the database, the most natural way to build such a database would be to have different ciphertexts for each entry in the database. In this case, for a database of size n, we could let f be an \((n+1)\)-ary function where \(f(q,x_1, \dots , x_n)\) is the result of a (valid) SQL query q on the database \((x_1, \dots , x_n)\).

Goldwasser et al. [25] discuss various other applications of MIFE to non-interactive differentially private data release, delegation of computation, computing over encrypted streams, etc. We refer the reader to [25] for a more complete treatment. Besides motivating the notion, Goldwasser et al. [25] gave various flavors of definitions for MIFE and its security, as well as constructions based on different forms of program obfuscation. First of all, we note a basic observation about MIFE: in the public-key setting, the functions for which one can hope to have any security at all are limited. In particular, a dishonest decryptor in possession of a public key \(\mathsf {PP}\), a secret key \(\mathsf {SK}_f\) for (say) a binary function f, and a ciphertext \(\mathsf {CT}\) encrypting message m, can try to learn m by repeatedly choosing some \(m'\) and learning \(f(m,m')\), namely by encrypting \(m'\) under \(\mathsf {PP}\) to get \(\mathsf {CT}'\) and decrypting \(\mathsf {CT},\mathsf {CT}'\) under \(\mathsf {SK}_f\). This means one can only hope for a very weak notion of security in such a case. As a result, in this work we focus on a more general setting where the functions have, say, a fixed arity n and there are encryption keys \(\mathsf {EK}_1,\ldots ,\mathsf {EK}_n\) corresponding to each index (i.e., \(\mathsf {EK}_i\) is used to encrypt a message which can then be used as an i-th argument in any function via decryption with the appropriate key). Only some subset of these keys (or maybe none of them) are known to the adversary. Note that this subsumes both the public-key and the secret-key setting (in which a much more meaningful notion of security may be possible). In this setting, [25] presented an MIFE scheme based on indistinguishability obfuscation (iO) [6, 21].

Bounded-message security: The construction of Goldwasser et al. [25] based on iO has a severe shortcoming, namely that it could only support security for an encryption of an a priori bounded number of messages. This bound is required to be fixed at the time of system setup and, if exceeded, would result in the guarantee of semantic security no longer holding. In other words, the number of challenge messages chosen by the adversary in the security game needed to be a priori bounded. The size of the public parameters in [25] grows linearly with the number of challenge messages.

Now we go back to the previous example of running SQL queries over encrypted databases where each entry in the database is encrypted individually. This bound would mean that the number of entries in the database would be bounded at the time of the system setup. Also, the number of updates to the database would be bounded as well. Similar restrictions would apply in other applications of MIFE: e.g., while computing over encrypted data streams, the number of data streams would have to be a priori bounded, etc. In addition, the construction of Goldwasser et al. [25] could only support selective security: the challenge messages and the set of “corrupted” encryption keys requested by the adversary are given out at the beginning of the experiment.

Let us informally refer to an MIFE construction that does not have these shortcomings as unbounded-message secure or simply fully-secure. In addition to the main construction based on iO, Goldwasser et al. [25] also showed a construction of adaptively-secure MIFE (except with respect to the subset of encryption keys given to the adversary, so we still do not call it fully-secure) that relies on a stronger form of obfuscation called differing-inputs obfuscation (\(di\mathcal {O}\)) [1, 6, 12]. Roughly, \(di\mathcal {O}\) says that for any two circuits \(C_0\) and \(C_1\) for which it is hard to find an input on which their outputs differ, it should be hard to distinguish their obfuscations, and moreover, given such a distinguisher, one can extract such a differing input. Unfortunately, due to recent negative results [22], \(di\mathcal {O}\) is now viewed as an implausible assumption. The main question we are concerned with in this work is: Can fully-secure MIFE be constructed from \(i\mathcal {O}\)?

1.1 Our Contributions

Our main result is a fully-secure MIFE scheme from sub-exponentially secure \(i\mathcal {O}\). More specifically, we use the following primitives: (1) sub-exponentially secure \(i\mathcal {O}\), (2) sub-exponentially secure injective one-way functions, and (3) standard public-key encryption (PKE). Here “sub-exponential security” refers to the fact that the advantage of any (efficient) adversary should be sub-exponentially small. For primitive (2), this should furthermore hold against adversaries running in sub-exponential time.

A few remarks about these primitives are in order. First, the required security level will depend on the function arity, but not on the number of challenge messages. Indeed, Goldwasser et al. already point out that selective security (though not bounded-message security, which instead has to do with their use of statistically sound non-interactive proofs) of their MIFE scheme based on \(i\mathcal {O}\) can be overcome by standard complexity leveraging. However, in that case the required security level would depend on the number of challenge messages. As in most applications we expect the number of challenge messages to be orders of magnitude larger than the function arity, this would result in much larger parameters than our scheme. Second, we only use a sub-exponentially secure injective one-way function (i.e., primitive (2)) in our security proof, not in the scheme itself. Thus it suffices for such an injective one-way function to simply exist for security of our MIFE scheme, even if we do not know an explicit candidate.

1.2 Our Techniques

The starting point of our construction is the fully-secure construction of MIFE based on \(di\mathcal {O}\) due to Goldwasser et al. [25] mentioned above. In their scheme, the encryption key for an index \(i \in [n]\) (where n is the function arity) is a pair of public keys \((pk^0_i, pk^1_i)\) for an underlying PKE scheme, and a ciphertext for index i consists of encryptions of the plaintext under \(pk^0_i,pk^1_i\) respectively, along with a simulation-sound non-interactive zero-knowledge proof that the two ciphertexts are well-formed (i.e., both encrypting the same underlying message). The secret key for a function f is an obfuscation of a program that takes as input n ciphertext pairs with proofs \((c^0_1,c^1_1,\pi _1),\ldots ,(c^0_n,c^1_n,\pi _n)\), and, if the proofs verify, decrypts the first ciphertext from each pair using the corresponding secret key, and finally outputs f applied to the resulting plaintexts. Note that it is important for the security proof to assume \(di\mathcal {O}\), since one needs to argue that, when the function keys are switched to decrypting the second ciphertext in each pair instead, an adversary who detects the change can be used to extract a false proof.

We will develop modifications to this scheme so that we can instead leverage a result of [12] that any indistinguishability obfuscator is in fact a differing-inputs obfuscator for circuits which differ on polynomially many points. In fact, we will only need to use this result for circuits which differ on a single point. However, we will need the extractor to work given an adversary with even an exponentially small distinguishing gap on the obfuscations of two such circuits, due to the exponential number of hybrids in our security proof. Fortunately, [17] showed that the result of [12] extends to this case if we start with an indistinguishability obfuscator that is sub-exponentially secure.

Specifically, we need to make the proofs of well-formedness described above unique for every ciphertext pair, so that there is only one differing input point in the corresponding hybrids in our security proof. To achieve this, we design novel “special-purpose” proofs built from \(i\mathcal {O}\) and punctured pseudorandom functions (PRFs) [11, 13, 29], which work as follows. We include in the public parameters an obfuscated program that takes as input two ciphertexts and a witness that they are well-formed (i.e., the message and randomness used for both the ciphertexts), and, if this check passes, outputs a (puncturable) PRF evaluation on those ciphertexts. Additionally, the secret key for a function f will now be an obfuscation of a program which additionally has this PRF key hardwired and verifies the “proofs” of well-formedness by checking that the PRF evaluations are correct. Interestingly, in the security proof, we will switch to doing this check via an injective one-way function applied to the PRF values (i.e., the PRF values themselves are not compared, but rather the outputs of an injective one-way function applied to them). This is so that extracting a differing input at this step in the security proof will correspond to inverting an injective one-way function; otherwise, the correct PRF evaluation would still be hard-coded in the obfuscated function key and we do not know how to argue security.

We now sketch the sequence of hybrids in our security proof. The proof starts from a hybrid where each challenge ciphertext encrypts \(m^0_i\) for \(i \in [n]\). Then we switch to a hybrid where each \(c^1_i\) is an encryption of \(m^1_i\) instead. These two hybrids are indistinguishable due to the security of the PKE scheme. Let \(\ell \) denote the length of a ciphertext. For each index \(i \in [n]\) we define hybrids indexed by x, for all \(x \in [2^{2n\ell }],\) in which the function key \(\mathsf {SK}_f\) decrypts the first ciphertext in the pair using \(\mathsf {SK}^0_i\) when \((c^0_1,c^1_1,..,c^0_n,c^1_n) < x\) and decrypts the second ciphertext in the pair using \(\mathsf {SK}^1_i\) otherwise. Parse \(x=(x^0_{1},x^1_1,..,x^0_{n},x^1_n)\). Hybrids indexed by x and \(x + 1\) can be proven indistinguishable as follows: We first switch to sub-hybrids that puncture the PRF key at \( \lbrace x^0_i,x^1_i \rbrace \), change the function key \(\mathsf {SK}_f\) to check correctness of a PRF value by applying an injective one-way function as described above, and hard-code the output of the injective one-way function at the PRF evaluation at the punctured point. Now if the two hybrids differ, it is at an input of the form \((x^0_1,x^1_1,\alpha _1,..,x^0_n,x^1_n,\alpha _n)\) where \(\alpha _i\) is some fixed value (a PRF evaluation of \((x^0_i,x^1_i)\)), so extracting the differing input can be used to invert the injective one-way function on a random input (namely the \(\alpha _i\)).

Finally, we note that exponentially many hybrids are indexed by all possible ciphertext vectors that could be input to decryption (i.e., vectors of length the arity of the functionality) and not all possible challenge ciphertext vectors. This allows us to handle any unbounded (polynomial) number of ciphertexts for every index.

Our techniques further demonstrate the power of the exponentially-many-hybrids technique, together with the iO \(\Rightarrow \) one-point-\(di\mathcal {O}\) transformation, which have also been used recently in works such as [8, 17].

1.3 Related Work, Open Problems

In this work we focus on an indistinguishability-based security notion for MIFE. This is justified as Goldwasser et al. [25] show that an MIFE meeting a stronger simulation-based security definition in general implies black-box obfuscation [6] and hence is impossible. They also point out that in the secret-key setting with small function arity, an MIFE scheme meeting the indistinguishability-based security notion can be “compiled” into a simulation-secure one, following the work of De Caro et al. [16]; in such a setting we can therefore achieve simulation-based security as well. We note that a main problem left open by our work is whether \(i\mathcal {O}\) without sub-exponential security implies MIFE, which would in some sense show these two primitives are equivalent (up to the other primitives used in the construction). Another significant open problem is removing the bound on a function’s arity in our construction, as well as the bound on the message length, perhaps by building on recent work in the setting of single-input FE [30].

Initial constructions of single-input FE from \(i\mathcal {O}\) [21] also had the shortcomings we are concerned with removing for constructions of MIFE in this work, namely selective and bounded-message security. These restrictions were similarly first overcome using differing-inputs obfuscation [1, 12], and later removed while only relying on \(i\mathcal {O}\) [2, 33]. Unfortunately, we have not been able to make the techniques of these works apply to the MIFE setting, which is why we have taken a different route. If they could, this would be a path towards solving the open problem of relying on \(i\mathcal {O}\) with standard security mentioned above.

[14] construct an adaptively secure multi-input functional encryption scheme in the secret-key setting, for any number of ciphertexts, from any secret-key functional encryption scheme. Their construction builds on a clever observation that the function keys of a secret-key function-hiding functional encryption scheme can be used to hide any message. This provides a natural ‘arity amplification’ procedure that allows one to go from a t-arity secret-key MIFE to a \((t+1)\)-arity MIFE. However, because the arity is amplified one step at a time, this leads to a blow-up in the scheme, so the arity of the functions had to be bounded by \(O\big (\log (\log k)\big )\). [4] builds on similar techniques but considers the construction of secret-key MIFE from a different viewpoint (i.e., building iO from functional encryption).

The existence of indistinguishability obfuscation is still a topic of active research. On one hand, there have been recent works such as [31] which break many of the existing iO candidates using [20]. However, there have been new/modified constructions which provably resist these attacks under a strengthened model of security [23].

There has also been progress on universal constructions and obfuscation combiners [3, 19]. A nearly up-to-date list of candidates along with their status can be found in [3]. Since multi-input functional encryption implies indistinguishability obfuscation (as shown in [25]), assuming iO is necessary. Finally, we note that the source of trouble in achieving differing-inputs obfuscation is the auxiliary input provided to the distinguisher. Another alternative to using differing-inputs obfuscation is public-coin \(di\mathcal {O}\) [28], where this auxiliary input is simply a uniform random string, as done in [5] (they, however, achieve selective security). There are no known implausibility results for public-coin \(di\mathcal {O}\), and it would be interesting to give an alternative construction of fully-secure MIFE based on it. Our assumption seems incomparable, as we only need \(i\mathcal {O}\), but with sub-exponential security.

1.4 Organisation

The rest of this paper is organized as follows: In Sect. 2, we recall some definitions and primitives used in the rest of the paper. In Sect. 3 we formally define MIFE and present our security model. Finally in Sect. 4, we present our construction and a security proof.

2 Preliminaries

In this section we recall various concepts on which the paper builds. We assume the reader is familiar with concepts such as public-key encryption and one-way functions and omit their formal descriptions. For the rest of the paper, we denote by \(\mathbb {N}\) the set of natural numbers \(\lbrace 1,2,3,..\rbrace \). Sub-exponential indistinguishability obfuscation and sub-exponentially secure puncturable pseudorandom functions have been used extensively in recent works such as [9, 15, 30]. For completeness, we present these notions below:

2.1 Indistinguishability Obfuscation

The following definition has been adapted from [21]:

Definition 1

A uniform PPT machine \(i\mathcal {O}\) is an indistinguishability obfuscator for a class of circuits \(\lbrace \mathcal {C}_{n}\rbrace _{n\in \mathbb {N}}\) if the following properties are satisfied.

Correctness: For every \(k \in \mathbb {N}\) and every \(C \in \mathcal {C}_{k}\), we have

$$Pr[C' \leftarrow i\mathcal {O}(1^{k},C) : \forall x, C'(x) = C(x)] =1$$

Security: For every non-uniform PPT adversary \(\mathcal {A}\) there exists a negligible function \(\epsilon \) such that for all \(k \in \mathbb {N}\), every pair of functionally equivalent, equi-sized circuits \(C_0,C_1 \in \mathcal {C}_{k}\), and every auxiliary input z,

$$\mid Pr[ \mathcal {A}(1^k, i\mathcal {O}(1^{k}, C_{0}), C_0,C_1,z) = 1] - Pr[ \mathcal {A}(1^k, i\mathcal {O}(1^{k}, C_{1}), C_0,C_1,z) = 1] \mid \le \epsilon (k)$$

We additionally say that \(i\mathcal {O}\) is sub-exponentially secure if there exists some constant \(\alpha >0\) such that for every non uniform PPT \(\mathcal {A}\) the above indistinguishability gap is bounded by \(\epsilon (k) = O(2^{-k^{\alpha }})\).

Definition 2 (Indistinguishability obfuscation for P/poly)

\(i\mathcal {O}\) is a secure indistinguishability obfuscator for P/poly if it is an indistinguishability obfuscator for the family of circuits \(\lbrace \mathcal {C}_{k} \rbrace _{k \in \mathbb {N}}\) where \(\mathcal {C}_k\) is the set of all circuits of size k.

2.2 Puncturable Pseudorandom Functions

A PRF \(F: \mathcal {K}_{k} \times \mathcal {X}_{k} \rightarrow \mathcal {Y}_{k}\) (for \(k \in \mathbb {N}\)) is a puncturable pseudorandom function if there is an additional punctured key space \(\mathcal {K}_p\) and three polynomial-time algorithms \((F.\mathsf {setup},F.\mathsf {eval}\), \(F.\mathsf {puncture})\) as follows:

  • \(F.\mathsf {setup}(1^k)\) is a randomized algorithm that takes the security parameter k as input and outputs a description of the key space \(\mathcal {K}\), the punctured key space \(\mathcal {K}_p\) and the PRF F.

  • \(F.\mathsf {puncture}(K,x)\) is a randomized algorithm that takes as input a PRF key \(K \in \mathcal {K}\) and \(x \in \mathcal {X}\) , and outputs a key \(K\lbrace x \rbrace \in \mathcal {K}_p\).

  • \(F.\mathsf {Eval}(K\lbrace x \rbrace ,x')\) is a deterministic algorithm that takes as input a punctured key \(K\lbrace x \rbrace \in \mathcal {K}_p\) and \(x'\in \mathcal {X}\), and outputs a value in \(\mathcal {Y}_k\). For the properties below, let \(K\in \mathcal {K}\), \(x\in \mathcal {X}\) and \(K \lbrace x \rbrace \leftarrow F.\mathsf {puncture}(K,x)\).

The primitive satisfies the following properties:

  1. Functionality is preserved under puncturing: For every \(x^{*}\in \mathcal {X}\) and every \(x\in \mathcal {X}\setminus \lbrace x^{*}\rbrace \),

    $$Pr[F.\mathsf {eval}(K \lbrace x^{*} \rbrace ,x)=F(K,x)]=1$$

    where the probability is taken over the randomness used in sampling K and puncturing it.

  2. Pseudorandomness at the punctured point: For any poly-size distinguisher D, there exists a negligible function \(\mu (\cdot )\), such that for all \(k \in \mathbb {N}\) and \(x^{*}\in \mathcal {X}\),

    $$\mid Pr[D(x^{*},K\lbrace x^{*}\rbrace ,F(K,x^{*}) )=1 ]-Pr[D(x^{*},K\lbrace x^{*}\rbrace ,u )=1 ] \mid \le \mu (k)$$

    where \(K\leftarrow F.\mathsf {Setup}(1^k)\), \(K\lbrace x^{*} \rbrace \leftarrow F.\mathsf {puncture}(K,x^{*})\) and \(u\xleftarrow {\$} \mathcal {Y}_k\).

We say that the primitive is sub-exponentially secure if \(\mu \) is bounded by \(O(2^{-k^{c_{PRF}}})\), for some constant \(0<c_{PRF}<1\). We also abuse notation slightly and use \(F(K,\cdot )\) and \(F.\mathsf {Eval}(K,\cdot )\) to mean one and the same thing, irrespective of whether the key is punctured or not.
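For concreteness, the following is a minimal sketch of the folklore GGM-based puncturable PRF matching the interface above, with SHA-256 standing in heuristically for the length-doubling PRG; all function and variable names are ours and this is not a construction used in the paper.

```python
import hashlib, os

def G(seed, bit):
    # Length-doubling PRG modeled heuristically with SHA-256; bit selects the half.
    return hashlib.sha256(bytes([bit]) + seed).digest()

def prf_eval(K, x):
    # GGM evaluation: walk the binary tree along the bits of x (a string such as "0110").
    node = K
    for b in x:
        node = G(node, int(b))
    return node

def prf_puncture(K, x_star):
    # Punctured key = the siblings of the nodes on the path to x_star (the co-path).
    copath, node = {}, K
    for i, b in enumerate(x_star):
        sib = 1 - int(b)
        copath[x_star[:i] + str(sib)] = G(node, sib)
        node = G(node, int(b))
    return {"point": x_star, "copath": copath}

def prf_eval_punctured(pk, x):
    # Agrees with prf_eval on every x != x_star of the same length; the value at x_star stays hidden.
    x_star = pk["point"]
    assert len(x) == len(x_star) and x != x_star
    i = next(j for j in range(len(x)) if x[j] != x_star[j])  # first differing bit
    node = pk["copath"][x[:i + 1]]
    for b in x[i + 1:]:
        node = G(node, int(b))
    return node

K = os.urandom(32)
pK = prf_puncture(K, "0110")
assert prf_eval_punctured(pK, "0111") == prf_eval(K, "0111")
```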

2.3 Injective One-Way Function

A one-way function with security \((s,\epsilon )\) is an efficiently evaluable function \(P:\{ 0,1\}^{*}\rightarrow \{ 0,1\}^{*} \) such that \(Pr_{x \xleftarrow {\$}{\{ 0,1\}}^n}[P(A(P(x)))=P(x)] < \epsilon (n)\) for all circuits A of size bounded by s(n). It is called an injective one-way function if it is injective on the domain \(\{ 0,1\}^{n}\) for all sufficiently large n.

In this work we require that there exists an \((s,\epsilon )\) injective one-way function with \(s(n)=2^{n^{c_{owp1}}}\) and \(\epsilon =2^{-n^{c_{owp2}}}\) for some constants \(0<c_{owp1}, c_{owp2}<1\). This assumption is well studied; [27, 35] have used \((2^{cn},1/2^{cn})\)-secure one-way functions and permutations for some constant c.

This is a reasonable assumption due to the following result from [24].

Lemma 1

Fix \(s(n)=2^{n/5}\). For all sufficiently large n, a random permutation \(\pi \) is \((s(n),1/2^{n/5})\) secure with probability at least \(1-2^{-2^{n/2}}\).

Such assumptions have been made and discussed in the works of [27, 34, 35]. In particular, we require the following assumption:

Assumption 1: For any adversary A with running time bounded by \(s(n)=O(2^{n^{c_{owp_1}}})\) and any a priori bounded polynomial p(n), there exists an injective one-way function P such that A, given \(P(r_1),\ldots ,P(r_p)\) for uniformly random \(r_1,\ldots ,r_p \xleftarrow {\$} \{ 0,1\}^{n}\) and access to an oracle \(\mathcal {O}\) that can reveal at most \(p-1\) of the values \(r_1,..,r_p\), recovers one of the unrevealed values with probability at most \(2^{-n^{c_{owp_2}}}\), for some constants \(0<c_{owp_1},c_{owp_2}<1\). Note that this assumption follows from the assumption described above with a loss of p in the security gap.

2.4 \((d,\delta )\)-Weak Extractability Obfuscators

The concept of a weak extractability obfuscator was first introduced in [12], where it was shown that if there is an adversary that can distinguish between indistinguishability obfuscations of two circuits that differ on polynomially many inputs with noticeable probability, then there is a PPT extractor that extracts a differing input with overwhelming probability. [17] generalised the notion to what they call a \((d,\delta )\) weak extractability obfuscator, which requires that if any PPT adversary can distinguish between obfuscations of two circuits (that differ on at most d inputs) with probability at least \(\epsilon >\delta \), then there is an explicit extractor that extracts a differing input with overwhelming probability and runs in time \(poly(1/\epsilon ,d,k)\). Such a primitive can be constructed from sub-exponentially secure indistinguishability obfuscation. A \((1,2^{-k})\) weak extractability obfuscator will be used crucially in our MIFE construction. We believe that in various applications of differing-inputs obfuscation, it may suffice to use this primitive along with other sub-exponentially secure primitives.

Definition 3

A uniform transformation \(we\mathcal {O}\) is a \((d,\delta )\) weak extractability obfuscator for a class of circuits \(\mathcal {C}= \lbrace \mathcal {C}_{k} \rbrace \) if the following holds. For every PPT adversary \(\mathcal {A}\) running in time \(t_{\mathcal {A}}\) and every \(1\ge \epsilon (k) > \delta \), there exists an algorithm E for which the following holds. For all sufficiently large k, every pair of circuits on n-bit inputs \(C_{0}, C_{1} \in \mathcal {C}_{k}\) differing on at most d(k) inputs, and every auxiliary input z,

$$\mid Pr[ \mathcal {A}(1^k, we\mathcal {O}(1^{k}, C_{0}) , C_0,C_1,z) = 1] - Pr[ \mathcal {A}(1^k, we\mathcal {O}(1^{k}, C_{1}), C_0,C_1,z) = 1] \mid \ge \epsilon $$
$$\Rightarrow Pr[x \leftarrow E(1^k,C_0, C_1, z) : C_0(x) \ne C_1(x)]\ge 1-\mathsf {negl}(k)$$

and the expected runtime of E is \(O(p_{E}(1/\epsilon ,d,t_{\mathcal {A}},n,k))\) for some fixed polynomial \(p_{E}\). In addition, we also require the obfuscator to satisfy correctness.

Correctness: For every \(n \in \mathbb {N}\) and every \(C \in \mathcal {C}_{n}\), we have

$$Pr[C' \leftarrow we\mathcal {O}(1^{n},C) : \forall x, C'(x) = C(x)] =1$$

We now construct a \((1,2^{-k})\) weak extractability obfuscator from sub-exponentially secure indistinguishability obfuscation. The following algorithm describes the obfuscation procedure.

\(we\mathcal {O}(1^k,C):\) The procedure outputs \(C'\leftarrow i\mathcal {O}(1^{k^{1/\alpha }},C)\). Here, \(\alpha >0\) is a constant chosen such that any polynomial time adversary against indistinguishability obfuscation has security gap upper bounded by \(2^{-k}/4\).
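As a small illustration of the parameter scaling above, the following hypothetical helper (the names, and the exact choice of \(k'\), are ours) picks the security parameter \(k'\) fed to \(i\mathcal {O}\) so that a sub-exponential advantage of the form \(2^{-(k')^{\alpha }}\) drops below the target \(2^{-k}/4\); the obfuscator itself is treated as an abstract callable.

```python
import math

def weO(iO, k, circuit, alpha):
    # Choose k' so that (k')^alpha >= k + 2, hence 2^{-(k')^alpha} <= 2^{-(k+2)} = 2^{-k}/4.
    k_prime = math.ceil((k + 2) ** (1.0 / alpha))
    return iO(k_prime, circuit)  # output C' <- iO(1^{k'}, C)
```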

The following theorem is proven in [17].

Theorem 1

Assuming sub-exponentially secure indistinguishability obfuscation, there exists a \((1,\delta )\) weak extractability obfuscator for P/poly for any \(\delta >2^{-k}\), where k is the size of the circuit.

In general, assuming sub-exponential security one can construct a \((d,\delta )\) weak extractability obfuscator for any \(\delta >2^{-k}\). Our construction is as follows:

\(we\mathcal {O}(C):\) Let \(\alpha \) be the security constant such that \(i\mathcal {O}\) with parameter \(1^{k^{1/\alpha }}\) has security gap upper bounded by \(O(2^{-3k})\). Such an \(\alpha \) exists due to the sub-exponential security of the indistinguishability obfuscation. The procedure outputs \(C'\leftarrow i\mathcal {O}(1^{k^{1/\alpha }},C)\).

We cite [12] for the proof of the following theorem.

Theorem 2

([12]). Assuming sub-exponentially secure indistinguishability obfuscation, there exists \((d,\delta )\) weak extractability obfuscator for P/poly for any \(\delta >2^{-k}\).

3 Multi-input Functional Encryption

Let \(\mathcal {X}=\lbrace \mathcal {X}_{k} \rbrace _{k \in \mathbb {N}}\) and \(\mathcal {Y} = \lbrace \mathcal {Y}_{k}\rbrace _{k\in \mathbb {N}}\) denote ensembles where each \(\mathcal {X}_{k}\) and \(\mathcal {Y}_{k}\) is a finite set. Let \(\mathcal {F}=\lbrace \mathcal {F}_{k} \rbrace _{k\in \mathbb {N}}\) denote an ensemble where each \(\mathcal {F}_{k}\) is a finite collection of n-ary functions. Each \(\mathsf {f}\in \mathcal {F}_{k}\) takes as input n strings \(\mathsf {x_{1},..,x_{n}}\) where each \(\mathsf {x_{i}}\in \mathcal {X}_{k}\) and outputs \(\mathsf {f(x_{1},..,x_{n})}\in \mathcal {Y}_{k}\). We now describe the algorithms.

  • \(\mathsf {MIFE.Setup(1^{\kappa },n)}\): is a PPT algorithm that takes as input the security parameter \(\mathsf {\kappa }\) and the function arity \(\mathsf {n}\). It outputs \(\mathsf {n}\) encryption keys \(\mathsf {EK_{1},..,EK_{n}}\) and a master secret key \(\mathsf {MSK}\).

  • \(\mathsf {MIFE.Enc(EK_i, m)}\): is a PPT algorithm that takes as input an encryption key \(\mathsf {EK_i \in \lbrace EK_{1},..,EK_{n}\rbrace }\) and an input message \(m \in \mathcal {X}_{k}\), and outputs a ciphertext \(\mathsf {CT_i}\), which denotes that the encrypted plaintext constitutes an \(\mathsf {i}\)-th input to a function \(\mathsf {f}\).

  • \(\mathsf {MIFE.Keygen(MSK, f)}\): is a PPT algorithm that takes as input the master secret key \(\mathsf {MSK}\) and an \(\mathsf {n}\)-ary function \(\mathsf {f}\in \mathcal {F}_{k}\) and outputs a corresponding decryption key \(\mathsf {SK_{f}}\).

  • \(\mathsf {MIFE.Dec(SK_{f} , CT_{1},..,CT_{n})}:\) is a deterministic algorithm that takes as input a decryption key \(\mathsf {SK_{f}}\) and \(\mathsf {n}\) ciphertexts \(\mathsf {CT_{1},..,CT_{n}}\) and outputs a string \(\mathsf {y}\in \mathcal {Y}_{k}\).

The scheme is said to satisfy correctness if for honestly generated encryption and function keys and any tuple of honestly generated ciphertexts, decryption of the ciphertexts with the function key for f outputs f applied to the messages encrypted inside the ciphertexts, with overwhelming probability.
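To make the syntax and the correctness condition concrete, here is a toy, deliberately insecure instantiation (it hides nothing and is emphatically not the construction of Sect. 4); all names are ours.

```python
# Toy MIFE: ciphertexts are just (slot index, plaintext) pairs, and keys are the
# functions themselves.  Useful only to exercise the four-algorithm interface.
def mife_setup(kappa, n):
    msk = object()                       # placeholder master secret key (kappa unused here)
    eks = list(range(1, n + 1))          # EK_i simply records the slot index i
    return eks, msk

def mife_enc(ek_i, m):
    return (ek_i, m)                     # "ciphertext" tagged with its slot index

def mife_keygen(msk, f):
    return f                             # "decryption key" for the n-ary function f

def mife_dec(sk_f, cts):
    # Correctness: Dec(SK_f, CT_1, ..., CT_n) = f(x_1, ..., x_n).
    xs = [m for (_i, m) in sorted(cts)]  # order the ciphertexts by slot index
    return sk_f(*xs)

# Example with arity n = 3 and f(x1, x2, x3) = x1 + x2 + x3.
eks, msk = mife_setup(128, 3)
sk_sum = mife_keygen(msk, lambda a, b, c: a + b + c)
cts = [mife_enc(eks[i], (i + 1) * 10) for i in range(3)]
assert mife_dec(sk_sum, cts) == 60
```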

Definition 4

Let \(\mathsf {\lbrace f \rbrace }\) be any set of functions \(f \in \mathcal {F}_{\kappa }\). Let \([n] = \lbrace 1,..,n \rbrace \) and \(I \subseteq [n]\). Let \(\varvec{X^{0}}\) and \(\varvec{X^{1}}\) be a pair of input vectors, where \(\varvec{X^{b}}=\lbrace x^{b}_{1,j},..,x^{b}_{n,j}\rbrace _{j=1}^{q}\). We define \(\lbrace f \rbrace \) and \((\varvec{X^{0}}, \varvec{X^{1}})\) to be I-compatible if they satisfy the following property: For every \(f \in \lbrace f \rbrace \), every \(I^{'} = \lbrace i_{1},..,i_{t} \rbrace \subseteq I\), every \(j_{1},..,j_{n-t}\in [q]\) and every \(x^{'}_{i_{1}},..,x^{'}_{i_{t}}\in \mathcal {X_{\kappa }}\),

$$f(<x^{0}_{i_{1},j_{1}},..,x^{0}_{i_{n-t},j_{n-t}},x^{'}_{i_{1}},..,x^{'}_{i_{t}}>)=f(<x^{1}_{i_{1},j_{1}},..,x^{1}_{i_{n-t},j_{n-t}},x^{'}_{i_{1}},..,x^{'}_{i_{t}}>)$$

where \(<y_{i_{1}},...,y_{i_{n}}>\) denotes a permutation of the values \(y_{i_{1}},..,y_{i_{n}}\) such that the value \(y_{i_{j}}\) is mapped to the \(l^{th}\) location if \(y_{i_{j}}\) is the \(l^{th}\) input (out of n inputs) to f.

IND-Secure MIFE: The security definition in [25] was parameterized by two parameters (t, q), where t denotes the number of encryption keys known to the adversary and q denotes the number of challenge messages per encryption key. Since our scheme can handle any unbounded polynomial q and any \(t\le n\), we present a definition independent of these parameters.

Definition 5 (Indistinguishability based security)

We say that a multi-input functional encryption scheme \(\mathsf {MIFE}\) for n-ary functions \(\mathcal {F}\) is fully IND-secure if for every PPT adversary \(\mathcal {A}\), the advantage of \(\mathcal {A}\) defined as

$$\mathsf {Adv}_{\mathcal {A}}^{\mathsf {MIFE,IND}}(1^{\kappa })=\vert Pr [\mathsf {IND}^{\mathsf {MIFE}}_{\mathcal {A}}]-1/2 \vert $$

is \(negl(\kappa )\), where:

Valid adversaries: In the above experiment, \(\mathcal {O}(\mathbf {EK},\cdot )\) is an oracle that takes an index i and outputs \(EK_i\). Let I be the set of queries to this oracle. \(\mathcal {E}(\mathbf {EK},b,\cdot ) \), on a query \((x^{0}_{1,j},..,x^{0}_{n,j}),(x^{1}_{1,j},..,x^{1}_{n,j})\) (where j denotes the query number), outputs \(\mathsf {CT}_{i,j}\leftarrow \mathsf {MIFE.Enc}(\mathsf {EK}_{i},x^{b}_{i,j}) \text { }\forall i \in [n]\). If q is the total number of queries to this oracle, then let \(\varvec{X^{l}}=\lbrace x^{l}_{1,j},..,x^{l}_{n,j}\rbrace _{j=1}^{q}\) for \(l\in \{ 0,1\}\). Also, let \(\lbrace f \rbrace \) denote the entire set of function key queries made by \(\mathcal {A}\). Then, the challenge message vectors \(\varvec{X^{0}}\) and \(\varvec{X^{1}}\) chosen by \(\mathcal {A}\) must be \(I\)-compatible with \(\lbrace f \rbrace \). The scheme is said to be secure if for any valid adversary \(\mathcal {A}\) the advantage in the game described above is negligible (Fig. 1).

Fig. 1. Security game

4 Our MIFE Construction

Notation: Let k denote the security parameter and \(n=n(k)\) denote the bound on the arity of the functions for which keys are issued. Let \(\mathsf {PRF=(PRF.Setup,PRF.}\mathsf {Puncture, PRF.Eval)}\) denote a sub-exponentially secure puncturable \(\mathsf {PRF}\) with security constant \(c_{PRF}\), and let \(\mathsf {PKE}\) denote a public-key encryption scheme. Let P be any one-to-one function (in the security proof we instantiate it with a sub-exponentially secure injective one-way function with security constants \(c_{owp1}\) and \(c_{owp2}\)). Finally, let \(\mathcal {O}\) denote a \((1,2^{-3nl-k})\) weak extractability obfuscator (here l is the length of a ciphertext of \(\mathsf {PKE}\)). In particular, for any two equivalent circuits the security gap of the obfuscation is bounded by \(2^{-3nl-k}\) (any algorithm that distinguishes obfuscations of two circuits with more than this gap will yield an algorithm that extracts a differing point).

\(\mathsf {MIFE.Setup(1^k, n):}\) For each \(i \in [n]\), sample \(K_i\leftarrow \mathsf {PRF.Setup}(1^\lambda )\) and \(\lbrace (PK^b_{i},SK^b_{i}) \rbrace _{b\in \{ 0,1\}} \leftarrow \mathsf {PKE.Setup}(1^k)\). Let \(PP_i\) be the circuit as in Fig. 2. \(EK_i\) is declared as the set \(EK_i= \lbrace PK^0_{i},PK^1_{i},\tilde{PP_i}=\mathcal {O}(PP_i),P\rbrace \) and \(MSK=\lbrace SK^0_{i},SK^1_{i},K_{i},P \rbrace _{i\in [n]}\). Here the injective function P takes as input elements from the co-domain of the PRF. \(\lambda \) is set greater than \((3nl+k)^{1/c_{PRF}} \), and so that the output length of the PRF is at least \(\max \lbrace (5nl+2k)^{1/c_{owp1}},(3nl+k)^{1/c_{owp2}} \rbrace \).

Fig. 2. Program Encrypt
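For intuition, the parameter choices above unwind as follows (a restatement of the sub-exponential security bounds already fixed; we write m for the output length of the PRF, which equals the input length of P):

$$2^{-\lambda ^{c_{PRF}}} < 2^{-(3nl+k)}, \qquad s(m) = 2^{m^{c_{owp1}}} \ge 2^{5nl+2k}, \qquad \epsilon (m) = 2^{-m^{c_{owp2}}} \le 2^{-(3nl+k)},$$

so the PRF distinguishing advantage, the running-time budget of the one-way-function inverter, and the inversion probability all line up with the number of hybrids used in the proof below.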

\(\mathsf {MIFE.Enc}(EK_i,m):\) To encrypt a message m, the encryptor does the following:

  • Compute \(c^0_{i}=\mathsf {PKE.Enc}(PK^0_{i},m;r^0)\) and \(c^1_{i}=\mathsf {PKE.Enc}(PK^1_{i},m;r^{1})\).

  • Evaluate \(\pi _i\leftarrow \tilde{PP_i}(c^0_{i},c^1_{i},m,r^0,r^1)\).

Output \(CT_i=(c^0_{i},c^1_{i},\pi _i)\).

\(\mathsf {MIFE.KeyGen}(MSK,f):\) Let \(G^0_{f}\) be the circuit described in Fig. 3. The key for f is output as \(K_f \leftarrow \mathcal {O}(G^0_f)\).

Fig. 3. Program \(G^0_f\)

\(\mathsf {MIFE.Decrypt}(K_f, \lbrace c^0_{i},c^1_{i},\pi _i\rbrace _{i\in [n]}):\) Output \(K_f(c^0_{1},c^1_{1},\pi _1,..,c^0_{n},c^1_{n},\pi _{n})\).
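The following is a toy, end-to-end sketch of the data flow of the scheme above; it is not secure and not the actual construction: the obfuscator \(\mathcal {O}\) is modeled as the identity on Python closures, “PKE” is an insecure stand-in, SHA-256 stands in for the PRF, and P is the identity (the scheme itself only requires a one-to-one function, as in the Notation paragraph). All helper names are ours, and the two programs follow only the prose description of Figs. 2 and 3.

```python
import hashlib, os

def pke_keygen():
    sk = os.urandom(16)
    return sk, sk                                    # toy scheme: PK = SK

def pke_enc(pk, m, r):
    # Deterministic given (m, r); messages up to 32 bytes in this toy.
    pad = hashlib.sha256(pk + r).digest()[:len(m)]
    return r + bytes(a ^ b for a, b in zip(m, pad))

def pke_dec(sk, ct):
    r, body = ct[:16], ct[16:]
    pad = hashlib.sha256(sk + r).digest()[:len(body)]
    return bytes(a ^ b for a, b in zip(body, pad))

def prf(K, data):
    return hashlib.sha256(K + data).digest()

O = lambda program: program                          # "obfuscation" placeholder
P = lambda y: y                                      # any one-to-one function

def mife_setup(n):
    EK, MSK = [], []
    for _ in range(n):
        K_i = os.urandom(16)
        (pk0, sk0), (pk1, sk1) = pke_keygen(), pke_keygen()

        def PP_i(c0, c1, m, r0, r1, pk0=pk0, pk1=pk1, K_i=K_i):
            # Program Encrypt: if (m, r0, r1) is a valid witness that (c0, c1)
            # encrypt the same message, release the PRF value on (c0, c1).
            if pke_enc(pk0, m, r0) == c0 and pke_enc(pk1, m, r1) == c1:
                return prf(K_i, c0 + c1)
            return None

        EK.append((pk0, pk1, O(PP_i)))
        MSK.append((sk0, sk1, K_i))
    return EK, MSK

def mife_enc(EK_i, m):
    pk0, pk1, PP_i = EK_i
    r0, r1 = os.urandom(16), os.urandom(16)
    c0, c1 = pke_enc(pk0, m, r0), pke_enc(pk1, m, r1)
    return (c0, c1, PP_i(c0, c1, m, r0, r1))         # pi_i is the "proof"

def mife_keygen(MSK, f):
    def G0_f(*cts):
        # Program G^0_f: verify every proof via P, decrypt the first ciphertext
        # of each pair with SK^0_i, and output f on the recovered plaintexts.
        msgs = []
        for (sk0, _sk1, K_i), (c0, c1, pi) in zip(MSK, cts):
            if pi is None or P(pi) != P(prf(K_i, c0 + c1)):
                return None
            msgs.append(pke_dec(sk0, c0))
        return f(*msgs)
    return O(G0_f)

def mife_dec(SK_f, cts):
    return SK_f(*cts)

# Example with arity n = 2 and f = concatenation of the two plaintexts.
EK, MSK = mife_setup(2)
SK_f = mife_keygen(MSK, lambda a, b: a + b)
cts = [mife_enc(EK[0], b"hello "), mife_enc(EK[1], b"world")]
assert mife_dec(SK_f, cts) == b"hello world"
```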

Remark

  1. We also assume that the circuits are padded appropriately before they are obfuscated.

  2. Note that in the scheme, the circuit \(G^0_f\) for the key of a function f is instantiated with an arbitrary one-to-one function (denoted by P). In the proofs we replace it with a sub-exponentially secure injective one-way function. Since the input-output behaviour of \(G^0_f\) does not change when it is instantiated with any one-to-one function, we can switch to a hybrid in which it is instantiated with a sub-exponentially secure injective one-way function, and by the security of the obfuscation these two hybrids are indistinguishable.

4.1 Proof Overview

The starting point of our construction is the fully-secure construction of MIFE based on \(di\mathcal {O}\) due to Goldwasser et al. [25] mentioned above. In their scheme, the encryption key for an index \(i \in [n]\) (where n is the function arity) is a pair of public keys \((pk^0_i, pk^1_i)\) for an underlying PKE scheme, and a ciphertext for index i consists of encryptions of the plaintext under \(pk^0_i,pk^1_i\) respectively, along with a simulation-sound non-interactive zero-knowledge proof that the two ciphertexts are well-formed (i.e., both encrypting the same underlying message). The secret key for a function f is an obfuscation of a program that takes as input n ciphertext pairs with proofs \((c^0_1,c^1_1,\pi _1),\ldots ,(c^0_n,c^1_n,\pi _n)\), and, if the proofs verify, decrypts the first ciphertext from each pair using the corresponding secret key, and finally outputs f applied to the resulting plaintexts. Note that it is important for the security proof to assume \(di\mathcal {O}\), since one needs to argue that, when the function keys are switched to decrypting the second ciphertext in each pair instead, an adversary who detects the change can be used to extract a false proof.

We develop modifications to this scheme so that we can instead leverage a result of [12] that any indistinguishability obfuscator is in fact a differing-inputs obfuscator for circuits which differ on polynomially many points. In fact, we will only need to use this result for circuits which differ on a single point. However, we will need the extractor to work given an adversary with even an exponentially small distinguishing gap on the obfuscations of two such circuits, due to the exponential number of hybrids in our security proof. We make use of sub-exponentially secure obfuscation to achieve this.

Specifically, we make the proofs of well-formedness described above unique for every ciphertext pair, so that there is only one differing input point in the corresponding hybrids in our security proof. To achieve this, we design novel “special-purpose” proofs built from \(i\mathcal {O}\) and punctured pseudorandom functions (PRFs) [11, 13, 29], which work as follows. We include in the public parameters an obfuscated program that takes as input two ciphertexts and a witness that they are well-formed (i.e., the message and randomness used for both the ciphertexts), and, if this check passes, outputs a (puncturable) PRF evaluation on those ciphertexts. Additionally, the secret key for a function f will now be an obfuscation of a program which additionally has this PRF key hardwired and verifies the “proofs” of well-formedness by checking that the PRF evaluations are correct. Interestingly, in the security proof, we will switch to doing this check via an injective one-way function applied to the PRF values (i.e., the PRF values themselves are not compared, but rather the outputs of an injective one-way function applied to them). This is so that extracting a differing input at this step in the security proof will correspond to inverting an injective one-way function; otherwise, the correct PRF evaluation would still be hard-coded in the obfuscated function key and we do not know how to argue security.

We now sketch the sequence of hybrids in our security proof. The proof starts from a hybrid where each challenge ciphertext encrypts \(m^0_i\) for \(i \in [n]\). Then we switch to a hybrid where each \(c^1_i\) is an encryption of \(m^1_i\) instead. These two hybrids are indistinguishable due to the security of the PKE scheme. Let \(\ell \) denote the length of a ciphertext. For each index \(i \in [n]\) we define hybrids indexed by x, for all \(x \in [2^{2n\ell }],\) in which the function key \(\mathsf {SK}_f\) decrypts the first ciphertext in the pair using \(\mathsf {SK}^0_i\) when \((c^0_1,c^1_1,..,c^0_n,c^1_n) < x\) and decrypts the second ciphertext in the pair using \(\mathsf {SK}^1_i\) otherwise. Parse \(x=(x^0_{1},x^1_1,..,x^0_{n},x^1_n)\). Hybrids indexed by x and \(x + 1\) can be proven indistinguishable as follows: We first switch to sub-hybrids that puncture the PRF key at \( \lbrace x^0_i,x^1_i \rbrace \), change the function key \(\mathsf {SK}_f\) to check correctness of a PRF value by applying an injective one-way function as described above, and hard-code the output of the injective one-way function at the punctured point. Now if the two hybrids differ, it is at an input of the form \((x^0_1,x^1_1,\alpha _1,..,x^0_n,x^1_n,\alpha _n)\) where \(\alpha _i\) is some fixed value (a PRF evaluation of \((x^0_i,x^1_i)\)), so extracting the differing input can be used to invert the injective one-way function on a random input (namely the \(\alpha _i\)). As in [12], this inverter runs in time inversely proportional to the distinguishing gap between the two consecutive hybrids (which is sub-exponentially small). Hence, we require a sub-exponentially secure injective one-way function to argue security.

Finally, we note that exponentially many hybrids are indexed by all possible ciphertext vectors that could be input to decryption (i.e., vectors of length the arity of the functionality) and not all possible challenge ciphertext vectors. This allows us to handle any unbounded (polynomial) number of ciphertexts for every index.

4.2 Proof of Security

Theorem 3

Assuming the existence of a sub-exponentially secure indistinguishability obfuscator, a sub-exponentially secure injective one-way function, and a polynomially secure public-key encryption scheme, there exists a fully IND-secure multi-input functional encryption scheme for any a priori bounded polynomial arity n.

Proof. We start by giving a lemma that will be crucial to the proof.

Lemma 2

Let X and Y denote two (possibly correlated) random variables drawn from distributions \(\mathcal {X}\) and \(\mathcal {Y}\) (with supports of size \(|\mathcal {X}|\) and \(|\mathcal {Y}|\)), and let U(X, Y) denote an event that depends on (X, Y). We say that \(U(X, Y) = 1\) if the event occurs, and \(U(X, Y) = 0\) otherwise. Suppose \(\Pr _{(X,Y) \sim \mathcal {X},\mathcal {Y} }[U(X, Y) = 1] = p\). We say that a transcript \(\mathbb {X}\) falls in the set ‘good’ if \(Pr_{Y \sim \mathcal {Y}}[U(\mathbb {X},Y) = 1] \ge p/2\). Then, \(Pr_{X \sim \mathcal {X}}[X \in \textsf {good}] \ge p/2\).

Proof. We prove the lemma by contradiction. Suppose \(Pr_{X \sim \mathcal {X}}[X \in \mathsf {good}] = c < \frac{p}{2}\). Then,

$$\begin{aligned} Pr_{(X,Y) \sim (\mathcal {X}, \mathcal {Y}) }[U(X,Y)=1]&= Pr_{(X,Y) \sim (\mathcal {X}, \mathcal {Y})}[U(X,Y)=1|X\in \mathsf {good}]\cdot \mathop {\Pr }\limits _{X \sim \mathcal {X}}[X\in \mathsf {good}] \\&+ Pr_{(X,Y) \sim (\mathcal {X}, \mathcal {Y})}[U(X,Y)=1|X\not \in \mathsf {good}]\cdot Pr_{X \sim \mathcal {X}}[X\not \in \mathsf {good}] \end{aligned}$$

By definition of the set good, \(Pr_{(X,Y) \sim (\mathcal {X}, \mathcal {Y})}[U(X,Y)=1|X\not \in \mathsf {good}]<\frac{p}{2}\). Thus, \(p = \Pr [U(X,Y)=1] < 1 \cdot c + (1-c) \cdot p/2\). If \(c < \frac{p}{2}\), this gives \(p < \frac{p}{2} + \frac{p}{2} = p\), which is a contradiction. This proves the lemma.

We proceed by listing hybrids, where the first hybrid corresponds to the case where the challenger encrypts the messages \(m^0_{i,j}\) for all \(i\in [n]\) and the last hybrid corresponds to the case where the challenger encrypts \(m^1_{i,j}\). We then prove that consecutive hybrids are indistinguishable. Finally, we sum up all the advantages between the hybrids and argue that the sum is negligible.
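In slightly more detail (this is only a back-of-the-envelope summary of the bounds established in the lemmas below; v is the polynomial from Lemma 9), the total advantage is bounded by

$$\mathsf {Adv}_{\mathcal {A}} \le \mathsf {negl}(k) + 2\cdot 2^{2nl}\cdot O(v\cdot 2^{-2nl-k}) = \mathsf {negl}(k) + O(v\cdot 2^{-k}) = \mathsf {negl}(k),$$

where the first term collects the polynomially many steps covered by Lemmas 3–8 and the second term accounts for the roughly \(2\cdot 2^{2nl}\) consecutive pairs of exponentially indexed hybrids covered by Lemma 9.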

\(\mathsf {H}_0\)

  1. The challenger runs the setup to compute the encryption keys \(EK_i\) for all \(i \in [n]\) and the master secret key MSK as described in the scheme.

  2. \(\mathcal {A}\) may query for encryption keys \(EK_i\) for some \(i\in [n]\), function keys for functions f, and ciphertexts, in an interleaved fashion.

  3. If it asks for an encryption key for index i, it is given \(EK_i\).

  4. When \(\mathcal {A}\) queries a key for an n-ary function \(f_j\), the challenger computes the key honestly using MSK.

  5. \(\mathcal {A}\) may also ask for encryptions of message vectors \(M^h=\lbrace (m^h_{1,j},..,m^h_{n,j}) \rbrace \) for \(h\in \{ 0,1\}\), where j denotes the encryption query number. The message vectors have to satisfy the constraint given in the security definition.

  6. For every query j, the challenger computes \(CT_{i,j}\) for all \(i \in [n]\) as follows: \(c^0_{i,j}= \mathsf {PKE.Enc}(PK^0_i,m^0_{i,j})\), \(c^1_{i,j}=\mathsf {PKE.Enc}(PK^1_i,m^0_{i,j})\) and \(\pi _{i,j} \leftarrow \mathsf {PRF.Eval}(K_i,c^0_{i,j}, c^1_{i,j})\). Then the challenger outputs \(CT_{i,j}=(c^0_{i,j},c^1_{i,j},\pi _{i,j})\).

  7. \(\mathcal {A}\) can continue to ask for function keys for functions \(f_j\), encryption keys \(EK_i\), and ciphertexts as long as they satisfy the constraint given in the security definition.

  8. \(\mathcal {A}\) now outputs a guess \(b'\in \{ 0,1\}\).

\(\mathsf {H_1}\!\!:\) Let q denote the number of ciphertext queries. This hybrid is the same as the previous one except that for all indices \(i\in [n], j\in [q]\), the challenge ciphertext component \(c^1_{i,j}\) is set as \(c^1_{i,j}=\mathsf {PKE.Enc}(PK^1_{i},m^1_{i,j})\).

\(\mathsf {H}_{x\in [2,2^{2ln}+2 ]}\!\!:\) This hybrid is the same as the previous one except that the key for every function query f is generated as an obfuscation of the program in Fig. 4, hard-wiring x (along with \(SK^0_i,SK^1_i,K_i,P\)).

Fig. 4. Program \(G_{f,x}\)
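Since Fig. 4 is not reproduced here, the following hypothetical fragment (all names are ours) captures the selection logic that the surrounding text attributes to \(G_{f,x}\); it can be exercised with the toy stand-ins from the construction sketch earlier in Sect. 4.

```python
def G_f_x(x, SK0, SK1, K, P, prf, pke_dec, f, cts):
    # Hypothetical sketch of the hybrid key program: verify every proof, then compare the
    # concatenated ciphertext tuple, read as an integer, against the hard-wired index x
    # to decide which of the two secret keys is used for decryption.
    blob = int.from_bytes(b"".join(c0 + c1 for (c0, c1, _pi) in cts), "big")
    msgs = []
    for i, (c0, c1, pi) in enumerate(cts):
        if P(pi) != P(prf(K[i], c0 + c1)):
            return None                        # reject malformed ciphertext tuples
        if blob < x:
            msgs.append(pke_dec(SK0[i], c0))   # "left" decryption, as in G^0_f
        else:
            msgs.append(pke_dec(SK1[i], c1))   # "right" decryption, as in G^1_f
    return f(*msgs)
```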

\(\mathsf {H}_{2^{2ln}+3 }\!\!:\) This hybrid is the same as the previous one except that the function key for any function f is generated by obfuscating the program in Fig. 5.

Fig. 5. Program \(G^1_{f}\)

\(\mathsf {H}_{2^{2ln}+4}\!\!:\) Let q denote the number of ciphertext queries made by the adversary. This hybrid is the same as the previous one except that for all indices \(i\in [n],j\in [q]\), the challenge ciphertext component \(c^0_{i,j}\) is generated as \(c^0_{i,j}=\mathsf {PKE.Enc}(PK^0_{i},m^1_{i,j})\).

\(\mathsf {H}_{2^{2ln}+4+x \mid x \in [2^{2ln}+1]}\!\!:\) This hybrid is the same as the previous one except that the key for a function f is generated by obfuscating the program in Fig. 4, hard-wiring \(2^{2ln}+3-x\) (along with \(SK^0_i,SK^1_i,K_i,P\)).

\(\mathsf {H}_{2\cdot 2^{2ln}+6}\!\!:\) This hybrid corresponds to the real security game when \(b=1\).

We now argue indistinguishability via the following lemmas.

Lemma 3

For any PPT distinguisher D, \(\mid Pr[D(\mathsf {H}_0)=1] - Pr[D(\mathsf {H}_1)=1] \mid \,<\,\mathsf {negl}(k) \).

Proof. This lemma follows from the security of the encryption scheme \(\mathsf {PKE}\). In these hybrids, all function keys only depend on the secret keys \(SK^0_{i}\) for all \(i\in [n]\), and \(SK^1_i\) never appears in the hybrids. If there is a distinguisher D that distinguishes between the hybrids then there exists an algorithm \(\mathcal {A}\) that breaks the security of the encryption scheme with the same advantage. \(\mathcal {A}\) gets a set of public keys \(PK_1,..,PK_n\) from the encryption scheme challenger, samples key pairs \((PK^0_i,SK^0_i) \forall i \in [n]\) itself, and sets \(PK^1_i=PK_i \forall i \in [n]\). It also samples PRF keys \(K_i\forall i \in [n]\). Using these keys, it generates the encryption keys \(EK_i\forall i \in [n]\). Then, it invokes D and answers queries for encryption keys \(EK_i\) and function keys. \(\mathcal {A}\) generates function keys as obfuscations of \(G^0_f\). Finally, D declares \(M^b=\lbrace (m^b_{1,j},..,m^b_{n,j}) \rbrace _{j \in [q]}\). \(\mathcal {A}\) sends \((M^0,M^1)\) to the encryption challenger and gets \(c_{i,j} \forall i \in [n],j\in [q]\) from the challenger. \(\mathcal {A}\) computes \(c^0_{i,j} \leftarrow \mathsf {PKE.Enc(PK^0_i,m^0_{i,j})}\) and then evaluates \(\pi _{i,j}\leftarrow \mathsf {PRF.Eval}(K_i,c^0_{i,j},c_{i,j})\). Then it sets \(CT_{i,j}=(c^0_{i,j},c_{i,j},\pi _{i,j})\) and sends it to D. After that D may query keys for functions and encryption keys, and the responses are given as before. D now submits a guess \(b'\), which is also output by \(\mathcal {A}\) as its guess for the encryption challenge. If \(c_{i,j}\) is an encryption of \(m^0_{i,j}\) then \(D\)'s view is identical to the view in \(\mathsf {H}_0\); otherwise its view is identical to the view in \(\mathsf {H}_1\). Hence, the distinguishing advantage of D in distinguishing the hybrids is at most the advantage of \(\mathcal {A}\) in breaking the security of the encryption scheme.

Lemma 4

For any PPT distinguisher D, \(\mid Pr[D(\mathsf {H}_1)=1] - Pr[D(\mathsf {H}_2)=1] \mid \,<\,\mathsf {negl}(k) \).

Proof. For simplicity, we consider the case where there is only a single function key query f. The general case can be argued by introducing v intermediate hybrids, where v is the number of keys issued to the adversary. Indistinguishability of these hybrids follows from the fact that the circuits \(G^0_f\) and \(G_{f,x=2}\) are functionally equivalent. Hence, the lemma holds due to the indistinguishability obfuscation property of the weak extractability obfuscator. For completeness, we describe the reduction. Namely, we construct an adversary \(\mathcal {A}\) that uses D to break the security of the weak extractability obfuscator. \(\mathcal {A}\) invokes D, does the setup (by sampling PKE key pairs and PRF keys for all indices), and answers ciphertext queries as in the previous hybrid \(\mathsf {H}_1\). On query f from D, it sends \(G^0_f\) and \(G_{f,x=2}\) to the obfuscation challenger. It receives \(K_f\) and forwards it to D. It replies to the encryption key queries of D using the sampled PKE keys and PRF keys. Then it outputs whatever D outputs. Note that the view of D is identical to the view in \(\mathsf {H}_1\) (if \(K_f\) is an obfuscation of \(G^0_f\)) or \(\mathsf {H}_2\) (if \(K_f\) is an obfuscation of \(G_{f,x=2}\)). Hence, the advantage of \(\mathcal {A}\) is at least the advantage of D in distinguishing the hybrids, and the claim holds due to the security of the obfuscation.

Lemma 5

For any PPT distinguisher D, \(\mid Pr[D(\mathsf {H}_{2^{2ln}+2 })=1] - Pr[D(\mathsf {H}_{2^{2ln}+3})=1] \mid \,<\,\mathsf {negl}(k) \).

Proof. This follows from the security of the indistinguishability obfuscator \(\mathcal {O}\). For any function f, \(G^1_f\) is functionally equivalent to \(G_{f,x=2^{2ln}+2}\). The proof of the lemma is similar to the proof of Lemma 4.

Lemma 6

For any PPT distinguisher D, \(\mid Pr[D(\mathsf {H}_{2^{2ln}+3 })=1] - Pr[D(\mathsf {H}_{2^{2ln}+4})=1] \mid \,<\,\mathsf {negl}(k) \).

Proof. This follows from the security of the encryption scheme \(\mathsf {PKE}\). Note that in both hybrids, \(SK^0_i\) is not used anywhere. The proof is similar to the proof of Lemma 3.

Lemma 7

For any PPT distinguisher D, \(\mid Pr[D(\mathsf {H}_{2^{2ln}+4 })=1] - Pr[D(\mathsf {H}_{2^{2ln}+5})=1] \mid \,<\,\mathsf {negl}(k) \).

Proof. This follows from the security of the indistinguishability obfuscator \(\mathcal {O}\). The proof is similar to the proof of Lemma 4.

Lemma 8

For any PPT distinguisher D, \(\mid Pr[D(\mathsf {H}_{2\cdot 2^{2ln}+5 })=1] - Pr[D(\mathsf {H}_{2\cdot 2^{2ln}+6})=1] \mid \,<\,\mathsf {negl}(k) \).

Proof. This follows from the security of the indistinguishability obfuscator \(\mathcal {O}\). The proof is similar to the proof of Lemma 4.

Lemma 9

For any PPT distinguisher D and \(x\in [2,2^{2ln}+1]\), \(\mid Pr[D(\mathsf {H}_{x})=1] - Pr[D(\mathsf {H}_{x+1})=1] \mid \,<\,O(v\cdot 2^{-2ln-k}) \) for some polynomial v.

Proof. We now list the following sub-hybrids and argue indistinguishability between them.

\(\mathsf {H}_{x,1}\)

  1. The challenger samples key pairs \((PK^0_i,SK^0_i),(PK^1_i,SK^1_i)\) for each \(i\in [n]\).

  2. It parses \(x-2=(x^0_1,x^1_1,..,x^0_n,x^1_n)\) and computes \((a^0_i,a^1_i)\leftarrow (\mathsf {PKE.Dec}(SK^0_i,x^0_i )\),\(\mathsf {PKE.Dec}(SK^1_i,x^1_i ))\).

  3. It samples puncturable PRF keys \(K_i \forall i \in [n]\).

  4. Let \(Z \subseteq [n]\) be the set of indices i such that \(a^0_i\ne a^1_i\). The challenger computes \(\alpha _i \leftarrow \mathsf {PRF.Eval}(K_i,x^0_i,x^1_i)\) and derives punctured keys \(K'_i \leftarrow \mathsf {PRF.Puncture}(K_i,x^0_i,x^1_i)\) for all \(i\in [n]\).

  5. If \(\mathcal {A}\) queries the encryption key for an index \(i \in Z\), \(\tilde{PP_i}\) is generated as an obfuscation of the circuit in Fig. 2 instantiated with the punctured key \(K'_i\) (\(\alpha _i\) will never be accessed by the circuit \(PP_i\) in this case). For all other indices i, \(\tilde{PP}_i\) is constructed using the punctured key \(K'_i\) and hard-coding the value \(\alpha _i\) (for input \((x^0_i,x^1_i)\)) as done in Fig. 6. These \(\tilde{PP}_i\) are used to respond to the queries for \(EK_i\).

  6. If \(\mathcal {A}\) queries a key for an n-ary function \(f_j\), the challenger computes the key honestly as in \(\mathsf {H}_x\) using MSK.

  7. If \(\mathcal {A}\) releases message vectors \(M^h=\lbrace (m^h_{1,j},..,m^h_{n,j}) \rbrace \) for \(h\in \{ 0,1\}\), the challenger computes \(CT_{i,j}\) for all \(i \in [n], j\in [q]\) as follows: \(c^0_{i,j}=\mathsf {PKE.Enc}(PK^0_i,m^0_{i,j})\) and \(c^1_{i,j}=\mathsf {PKE.Enc}(PK^1_i,m^1_{i,j})\). If \((c^0_{i,j},c^1_{i,j})=(x^0_i,x^1_i)\), it sets \(\pi _{i,j}=\alpha _{i}\); otherwise it sets \(\pi _{i,j} \leftarrow \mathsf {PRF.Eval}(K_i,c^0_{i,j},c^1_{i,j})\). Then the challenger outputs \(CT_{i,j}=(c^0_{i,j},c^1_{i,j},\pi _{i,j})\). Here q denotes the total number of encryption queries.

  8. \(\mathcal {A}\) can ask for function keys for functions \(f_j\) and encryption keys \(EK_i\) as long as they satisfy the constraint with the message vectors.

  9. \(\mathcal {A}\) now outputs a guess \(b'\in \{ 0,1\}\).

Fig. 6. Program Encrypt*

\(\mathsf {H}_{x,2}\!\!:\) This hybrid is similar to the previous one except that the function key for any function f is generated as an obfuscation of the program in Fig. 7, hard-wiring \((SK^0_i,SK^1_i,K^{'}_i,P,P(\alpha _i),x^0_i,x^1_i)\forall i \in [n]\).

Fig. 7. Program \(G^*_{f,x}\)

\(\mathsf {H}_{x,3}\!\!:\) This hybrid is similar to the previous one except that for all \(i\in [n]\), \(\alpha _i\) is chosen uniformly at random from the domain of the injective one-way function P.

\(\mathsf {H}_{x,4}\!\!:\) This hybrid is similar to the previous one except that the function key is generated as an obfuscation of the program in Fig. 7 initialised with \(x+1\).

\(\mathsf {H}_{x,5}\): This hybrid is the same as the previous one except that for all \(i \in [n]\), \(\alpha _i\) is set to the actual PRF value at \((x^0_i,x^1_i)\) under the key \(K_i\).

\(\mathsf {H}_{x,6}\): This hybrid is the same as the previous one except that the key for the function f is generated as an obfuscation of the program in Fig. 4 initialised with \(x+1\).

\(\mathsf {H}_{x,7}\): This hybrid is the same as the previous one except that for all \(i \in [n]\), \(\tilde{PP}_i\) is generated as an obfuscation of the program in Fig. 2 initialised with the genuine PRF key \(K_i\). This hybrid is identical to the hybrid \(\mathsf {H}_{x+1}\).

Claim

For any PPT distinguisher D, \(\mid Pr[D(\mathsf {H}_{x})=1] - Pr[D(\mathsf {H}_{x,1})=1] \mid \,<\,O(n\cdot 2^{-3nl-k}) \).

Proof. This claim follows from the indistinguishability security of the weak extractability obfuscator. For \(i\in Z\), the circuit in Fig. 2 initialised with the regular PRF key \(K_i\) is functionally equivalent to the same circuit initialised with the punctured key \(K'_i\). This is because for \(i\in Z\), \((x^0_i,x^1_i)\) never satisfies the check, so the PRF is never evaluated at this point, and the punctured key evaluates correctly at all points except the punctured one. For \(i\in [n]{\setminus }Z\), the program in Fig. 2 initialised with \(K_i\) is functionally equivalent to the program in Fig. 6 initialised with \((K'_i,\alpha _i)\).

From the above observation, we can prove the claim via at most n intermediate hybrids in which we switch the obfuscations \(\tilde{PP}_i\) one by one to use the punctured keys; each pair of adjacent intermediate hybrids is indistinguishable due to the security of the obfuscation.

Claim

For any PPT distinguisher D, \(\mid Pr[D(\mathsf {H}_{x,1})=1] - Pr[D(\mathsf {H}_{x,2})=1] \mid \,<\,O(p(k)\cdot 2^{-3nl-k}) \). Here, p(k) is some polynomial.

Proof. This follows from the indistinguishability obfuscation property of the weak extractability obfuscator \(\mathcal {O}\). The proof proceeds via at most p intermediate hybrids in which each queried \(K_f\) is switched from an obfuscation of the program in Fig. 4 (with hard-wired values \(SK^0_i,SK^1_i,K_i,x,P\)) to an obfuscation of the program in Fig. 7 (with hard-wired values \(SK^0_i,SK^1_i,K'_i,P,P(\alpha _i),x\)). Note that in these hybrids, both programs are functionally equivalent. This reduction is straightforward and we omit the details.

Claim

For any PPT distinguisher D, \(\mid Pr[D(\mathsf {H}_{x,2})=1] - Pr[D(\mathsf {H}_{x,3})=1] \mid \,<\,O(n\cdot 2^{-2nl-k}) \).

Proof. This claim follows from the property that a puncturable PRF's value at the punctured point is pseudo-random given the punctured key (sub-exponential security of the puncturable PRF). The proof goes through via a sequence of at most n hybrids in which, for each index \(i \in [n]\), \((K'_i,\alpha _i=\mathsf {PRF.Eval}(K_i,x^0_i,x^1_i))\) is replaced with \((K'_i,\alpha _i \leftarrow \mathcal {R})\). This can be done because in both hybrids the function keys and the encryption keys use only the punctured keys and the value of the PRF at the punctured point. Here \(\mathcal {R}\) is the co-domain of the PRF, which is equal to the domain of the injective one-way function P. Since the PRF is sub-exponentially secure with security constant \(c_{PRF}\), when the PRF is initialised with a parameter greater than \((2nl+k)^{1/c_{PRF}}\) the distinguishing advantage between each pair of consecutive intermediate hybrids is bounded by \(O(2^{-2nl-k})\). The reduction is straightforward and we omit the details.
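Spelling out the parameter accounting (writing \(\lambda \) for the PRF security parameter, a symbol used here only for this calculation, and assuming sub-exponential security means advantage at most \(2^{-\lambda ^{c_{PRF}}}\)):

$$\lambda \ge (2nl+k)^{1/c_{PRF}} \;\Rightarrow\; 2^{-\lambda ^{c_{PRF}}} \le 2^{-(2nl+k)} = 2^{-2nl-k},$$

so each of the at most n intermediate hybrids contributes at most \(O(2^{-2nl-k})\), and the total loss is \(O(n\cdot 2^{-2nl-k})\) as claimed.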

Claim

For any PPT distinguisher D, \(\mid Pr[D(\mathsf {H}_{x,3})=1] - Pr[D(\mathsf {H}_{x,4})=1] \mid \,<\,O(p(k)\cdot 2^{-2nl-k}) \) for some polynomial p(k).

Proof. We prove this claim for the simplified case in which only one function key is queried. The general case follows by considering a sequence of intermediate hybrids in which the function keys are changed one by one, hence the factor p(k). Assume that there is a PPT algorithm D such that \(\mid Pr[D(\mathsf {H}_{x,3})=1] - Pr[D(\mathsf {H}_{x,4})=1] \mid \,>\,\epsilon > 2^{-2nl-k}\). Note that these hybrids are identical up to the point at which the adversary asks for a key for a function f. We argue indistinguishability according to the following cases.

  1.

    Case 0: The circuit in Fig. 7 initialised with x is functionally equivalent to the circuit in Fig. 7 initialised with \(x+1\).

  2.

    Case 1: This is the case in which the two circuits described above are not equivalent.

Let Q be the random variable that equals 0 if the adversary is in Case 0 and 1 otherwise. Denote by \(\epsilon _{Q=b}\) the value \(\mid Pr[D(\mathsf {H}_{x,3})=1/Q=b] - Pr[D(\mathsf {H}_{x,4})=1/Q=b] \mid \). It holds that \(Pr[Q=0]\epsilon _{Q=0}+Pr[Q=1]\epsilon _{Q=1} >\epsilon \).
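Explicitly, this last inequality is the law of total probability combined with the triangle inequality:

$$\epsilon \,<\, \mid Pr[D(\mathsf {H}_{x,3})=1] - Pr[D(\mathsf {H}_{x,4})=1] \mid \,=\, \mid \varSigma _{b\in \{0,1\}} Pr[Q=b]\,(Pr[D(\mathsf {H}_{x,3})=1/Q=b] - Pr[D(\mathsf {H}_{x,4})=1/Q=b]) \mid \,\le\, Pr[Q=0]\epsilon _{Q=0}+Pr[Q=1]\epsilon _{Q=1}.$$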

Now we analyse both these cases:

\(\mathbf {Pr[Q=0]\epsilon _{Q=0}<2^{-2nl-k}}\): This claim follows from the indistinguishability security of the \((1,2^{-3nl-k})\) weak extractability obfuscator. Given a distinguisher D in the case \(Q=0\), we construct an algorithm \(\mathcal {A}\) that uses D and breaks the indistinguishability obfuscation property of the weak extractability obfuscator with the same advantage. \(\mathcal {A}\) works as follows: it internally runs the challenger C, which in turn runs D; C performs the setup as in the hybrid and responds to D's queries. When D outputs f, \(\mathcal {A}\) submits \(G^{*}_{f,x}\) and \(G^{*}_{f,x+1}\) to the obfuscation challenger and receives \(K_f\) in return, which it forwards to D. D's remaining queries are answered by C, and \(\mathcal {A}\) outputs whatever D outputs. \(\mathcal {A}\) breaks the indistinguishability obfuscation security of the weak extractability obfuscator with advantage at least \(\epsilon _{Q=0}\), since the view of D is identical to \(\mathsf {H}_{x,3}\) if \(G^*_{f,x}\) was obfuscated and identical to \(\mathsf {H}_{x,4}\) otherwise.

\(\mathbf {Pr[Q=1]\epsilon _{Q=1}<2^{-2nl-k}}\): The only point at which the two circuits \(G^*_{f,x}\) and \(G^{*}_{f,x+1}\) may differ in this case is \((x^0_1,x^1_1,\alpha _1,\ldots ,x^0_n,x^1_n,\alpha _n)\), where \(\alpha _i\) is the preimage of the fixed injective one-way function value \(P(\alpha _i)\). In this case the claim holds due to the security of the weak extractability obfuscator. Assume to the contrary that \(Pr[Q=1]\epsilon _{Q=1}>\delta >2^{-2nl-k}\). Let \(\tau \) be the transcript (including the randomness used to generate the \(\mathsf {PKE}\) keys and \(\mathsf {PRF}\) keys, along with the chosen \(\alpha _i\)'s) between the challenger and the adversary up to the point at which the function key for f is queried. We say \(\tau \in \mathsf {good}\) if, conditioned on \(\tau \), \(\epsilon _{\tau ,Q=1}>\epsilon _{Q=1}/2\). Then, using Lemma 2, one can show that \(Pr[\tau \in \mathsf {good}]>\epsilon _{Q=1}/2\).

Let Z denote the set of indices \(i\in [n]\) such that \(a^0_i\ne a^1_i\). Note that the adversary can obtain \(\alpha _i\) in one of the following two ways: either \(a^0_i=a^1_i\) and the adversary queries for \(EK_i\), or the adversary queries for an encryption of \((a^0_i,a^1_i)\) and the challenger returns the encryption \((x^0_i,x^1_i,\alpha _i)\) with some probability. Let E denote the set of indices for which the \(\alpha _i\)'s were obtained by the adversary via the first method and S the set obtained via the second. Then \(S \cup E \ne [n]\): since \(Q=1\), the adversary cannot query for all such ciphertexts and encryption keys in these hybrids, and in particular \(f(<\lbrace a^0_i\rbrace _{i\in S}, \lbrace a^0_i\rbrace _{i\in E}>) \ne f(<\lbrace a^1_i\rbrace _{i\in S}, \lbrace a^0_i\rbrace _{i\in E}>)\). Here \(<,>\) denotes the arrangement that sends a variable with subscript i to index i.

Now let \(T \subsetneq [n]\) denote the set of indices \(i \in [n]\) for which \(\alpha _i\) was requested by D (either by querying a ciphertext or by querying \(EK_i\) with \(a^0_i=a^1_i\)). We know that, conditioned on \(\tau \) (the randomness up to the point at which f is queried),

$$\mid Pr[D(\mathsf {H}_{x,3})=1/Q=1,\tau ] - Pr[D(\mathsf {H}_{x,4})=1/Q=1,\tau ] \mid \,>\,\epsilon _{Q=1}/2$$

Summing over all proper subsets \(t \subsetneq [n]\) (the possible values of T),

$$\varSigma _{t} \mid Pr[D(\mathsf {H}_{x,3})=1\cap T=t/Q=1,\tau ] - Pr[D(\mathsf {H}_{x,4})=1 \cap T=t/Q=1,\tau ] \mid \,>\,\epsilon _{Q=1}/2$$

Since the number of proper subsets of [n] is bounded by \(2^n\), there exists a set t such that

$$\mid Pr[D(\mathsf {H}_{x,3})=1\cap T=t/Q=1,\tau ] - Pr[D(\mathsf {H}_{x,4})=1 \cap T=t/Q=1,\tau ] \mid \,>\,\epsilon _{Q=1}/2^{n+1}$$
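This existence step is a standard averaging argument: if every term were at most \(\epsilon _{Q=1}/2^{n+1}\), then, since there are at most \(2^n\) proper subsets \(t \subsetneq [n]\),

$$\varSigma _{t} \mid Pr[D(\mathsf {H}_{x,3})=1\cap T=t/Q=1,\tau ] - Pr[D(\mathsf {H}_{x,4})=1 \cap T=t/Q=1,\tau ] \mid \,\le\, 2^n \cdot \epsilon _{Q=1}/2^{n+1} \,=\, \epsilon _{Q=1}/2,$$

contradicting the previous bound.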

Now we construct an adversary \(\mathcal {A}\) that breaks the security of the injective one-way function with probability \(Pr[Q=1]\epsilon _{Q=1}/2^{n+1}\) and runs in time \(O(2^{2n}/\epsilon ^2_{Q=1})\). \(\mathcal {A}\) proceeds as follows:

  1.

    \(\mathcal {A}\) invokes D. It then performs the setup, generating the \(\mathsf {PKE}\) keys and the punctured PRF keys \(K'_i\) for all indices in [n] according to hybrid \(\mathsf {H}_{x,3}\).

  2.

    \(\mathcal {A}\) receives the injective one-way function values \((P,P(\alpha _1),\ldots ,P(\alpha _n))\) from the injective one-way function challenger.

  3.

    \(\mathcal {A}\) now guesses a uniformly random proper subset \(t\subsetneq [n]\).

  4.

    For all indices \(i \in t\), it gets \(\alpha _i\) from the injective one-way function challenger.

  5.

    If \(EK_i\) is requested for any \(i\in t \cup Z\), it is generated as in \(\mathsf {H}_{x,3}\) and given out; otherwise, \(\mathcal {A}\) aborts. We call the transcript up to this point \(\tau \).

  6.

    When D asks for a key for f: if f is such that \(Q=0\), \(\mathcal {A}\) outputs \(\bot \). Otherwise, \(\mathcal {A}\) constructs a distinguisher \(\mathcal {B}\) for obfuscations of the circuits \(G^*_{f,x}\) and \(G^*_{f,x+1}\) as follows:

    • \(\mathcal {A}\) gets as a challenge an obfuscation \(\tilde{C}_f\), which is an obfuscation of either \(G^*_{f,x}\) or \(G^*_{f,x+1}\).

    • \(\mathcal {A}\) gives this obfuscation to \(\mathcal {B}\), which resumes D from the point of the transcript \(\tau \) and forwards the obfuscation to D.

    • When D asks for a ciphertext, if the query can be answered by \(\mathcal {B}\) using \(\alpha _i\) for \(i \in t\), then \(\mathcal {B}\) answers it; otherwise, \(\mathcal {B}\) outputs 0.

    • If \(EK_i\) is asked by D for any \(i\in t \cup Z\), it is generated as in \(\mathsf {H}_{x,3}\) and given out. If any other encryption key is queried, \(\mathcal {B}\) outputs 0.

    • If the set of indices whose \(\alpha _i\) values were used to generate responses to the queries (in the transcript \(\tau \) and in the queries asked by D when run by \(\mathcal {B}\)) equals t, then \(\mathcal {B}\) outputs whatever D outputs; otherwise, \(\mathcal {B}\) outputs 0.

  7.

    If t is guessed correctly, it is easy to check that \(\mid Pr[\mathcal {B}(G^*_{f,x},G^*_{f,x+1},\mathcal {O}(G^*_{f,x}),aux )=1]- Pr[ \mathcal {B}(G^*_{f,x},G^*_{f,x+1},\mathcal {O}(G^*_{f,x+1}),aux )=1] \mid \,>\,\epsilon _{Q=1}/2^{n+1} \). (Here aux is the information \(\mathcal {A}\) needs to run \(\mathcal {B}\), including \(\alpha _i\) for all \(i \in t\), \(P(\alpha _i),PK^0_i,PK^1_i,SK^0_i,SK^1_i ,K'_i\) for all \(i \in [n]\), and the transcript \(\tau \) up to the point at which f is queried.) This is because,

    $$\mid Pr[\mathcal {B}(G^*_{f,x},G^*_{f,x+1},\mathcal {O}(G^*_{f,x}),aux )=1]- Pr[\mathcal {B}(G^*_{f,x},G^*_{f,x+1},\mathcal {O}(G^*_{f,x+1}),aux )=1 ]\mid =$$
    $$\mid Pr[D(\mathsf {H}_{x,3})=1\cap T=t/Q=1,\tau ] - Pr[D(\mathsf {H}_{x,4})=1 \cap T=t/Q=1,\tau ] \mid \,>\,\epsilon _{Q=1}/2^{n+1}$$
  8.

    We finally run the extractor E of the weak extractability obfuscator using \(\mathcal {B}\) to extract a point \((x^0_1,x^1_1,\alpha _1,\ldots ,x^0_n,x^1_n,\alpha _n)\). (This extraction can be run as long as \(\epsilon _{Q=1}/2^{n+1}>2^{-3nl}\), which is the case whenever \(\epsilon _{Q=1}>2^{-2nl-k}\); otherwise there is nothing to prove and the claim trivially goes through.) The extractor runs in time \(O(t_{D}\cdot 2^{2n}/\epsilon ^2_{Q=1})\). The probability of success of this extraction is

    $$Pr[Q=1]\cdot Pr[\tau \text { is good}] \cdot Pr[\text { t is guessed correctly}]>Pr[Q=1]\cdot \epsilon _{Q=1}/2^{n+1}$$
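The factors in this bound arise as follows: \(Pr[\tau \in \mathsf {good}]>\epsilon _{Q=1}/2\) by Lemma 2, and the uniformly guessed proper subset t is correct with probability at least \(2^{-n}\), so

$$Pr[Q=1]\cdot Pr[\tau \text { is good}] \cdot Pr[\text { t is guessed correctly}] \,>\, Pr[Q=1]\cdot (\epsilon _{Q=1}/2)\cdot 2^{-n} \,=\, Pr[Q=1]\cdot \epsilon _{Q=1}/2^{n+1}.$$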

Let \(\mu \) be the input length of the injective one-way function. We consider the following cases:

Case 0: If \(Pr[Q=1]\epsilon _{Q=1} < O(2^{-2nl-k})\), the claim goes through immediately.

Case 1: If \(Pr[Q=1]\epsilon _{Q=1}/2^{n+1} < O(2^{-\mu ^{c_{owp2}}})\), the claim goes through provided \(\mu \) is set to be greater than \((3nl+k)^{1/c_{owp2}}\).

Case 2: If Case 1 does not occur, then by the security of the injective one-way function P we must have \(2^{2n}/\epsilon ^2_{Q=1}>2^{\mu ^{c_{owp1}}}\), which implies that the claim holds if \(\mu \) is greater than \((5nl+2k)^{1/c_{owp1}}\).

Hence, if \(\mu > \max \lbrace (3nl+k)^{1/c_{owp2}}, (5nl+2k)^{1/c_{owp1}} \rbrace \), then \(Pr[Q=1]\epsilon _{Q=1}<2^{-2nl-k}\) and the claim holds.
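As a sanity check on this choice of \(\mu \), the two thresholds translate directly into the bounds used in Cases 1 and 2 (this is only the arithmetic already implicit above):

$$\mu > (3nl+k)^{1/c_{owp2}} \;\Rightarrow\; 2^{-\mu ^{c_{owp2}}} < 2^{-3nl-k}, \qquad \mu > (5nl+2k)^{1/c_{owp1}} \;\Rightarrow\; 2^{\mu ^{c_{owp1}}} > 2^{5nl+2k}.$$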

Claim

For any PPT distinguisher D, \(\mid Pr[D(\mathsf {H}_{x,4})=1] - Pr[D(\mathsf {H}_{x,5})=1] \mid \,<\,O(n\cdot 2^{-2nl-k}) \).

Proof. This claim follows from the security of the puncturable PRF. The proof is similar to the proof of Claim 4.2.

Claim

For any PPT distinguisher D, \(\mid Pr[D(\mathsf {H}_{x,5})=1] - Pr[D(\mathsf {H}_{x,6})=1] \mid \,<\,O(p(k)\cdot 2^{-2nl-k}) \). Here \(p(\cdot )\) is some polynomial.

Proof. This claim follows from the indistinguishability obfuscation security of the weak extractability obfuscator. The proof is similar to the proof of Claim 4.2.

Claim

For any PPT distinguisher D, \(\mid Pr[D(\mathsf {H}_{x,6})=1] - Pr[D(\mathsf {H}_{x,7})=1] \mid \,<\,O(n\cdot 2^{-2nl-k}) \).

Proof. This claim follows from the indistinguishability obfuscation security of the weak extractability obfuscator \(\mathcal {O}\). The proof is similar to the proof of Claim 4.2.

Combining all the claims above, we prove the lemma.

Lemma 10

For any PPT distinguisher D and \(x\in [2^{2ln}]\), \(\mid Pr[D(\mathsf {H}_{2^{2ln}+4+x})=1] - Pr[D(\mathsf {H}_{2^{2ln}+5+x})=1] \mid \,<\,O(v(k)\cdot 2^{-2nl-k}) \) for some polynomial v(k).

Proof. The proof of this lemma is similar to the proof of Lemma 9.

Combining all the lemmas above, we get that for any PPT D,

$$\mid Pr[D(\mathsf {H}_{0})=1] - Pr[D(\mathsf {H}_{2\cdot 2^{2ln}+6})=1] \mid \,<\,\mathsf {negl}(k)+ 2\cdot 2^{2nl}\cdot O(v(k)\cdot 2^{-2nl-k}) <\mathsf {negl}(k).$$
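The dominant term in this bound comes from the \(2\cdot 2^{2ln}\) applications of Lemmas 9 and 10, each costing \(O(v(k)\cdot 2^{-2nl-k})\):

$$2\cdot 2^{2nl}\cdot O(v(k)\cdot 2^{-2nl-k}) \,=\, O(v(k)\cdot 2^{-k+1}) \,=\, \mathsf {negl}(k),$$

while the remaining hybrid transitions account for the first \(\mathsf {negl}(k)\) term.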