
On the power of rewinding simulators in functional encryption

Published in Designs, Codes and Cryptography.

Abstract

In a seminal work, Boneh, Sahai and Waters (BSW) [TCC’11] showed that for functional encryption the indistinguishability notion of security (IND-Security) is weaker than simulation-based security (SIM-Security), and that SIM-Security is in general impossible to achieve. This has opened the door to a plethora of papers showing feasibility and new impossibility results. Nevertheless, the quest for better definitions that overcome both (1) the limitations of IND-Security and (2) the known impossibility results is still open. In this work, we explore the benefits and the limits of using efficient rewinding black-box simulators to argue security. To do so, we introduce a new simulation-based security definition, which we call rewinding simulation-based security (RSIM-Security). It is weaker than the previous definitions but still strong enough to rule out the pathological schemes admitted by IND-Security (which is implied by RSIM-Security). This is achieved by retaining a strong simulation-based flavour while granting the simulator more rewinding power, taking care to guarantee that it cannot learn more than what the adversary would learn in any run of the experiment. We find that for RSIM-Security the BSW impossibility result does not hold, and that IND-Security is equivalent to RSIM-Security for attribute-based encryption in the standard model. Nevertheless, we prove that there is a setting where rewinding simulators are of no help: the adversary can put in place a strategy that forces the simulator to rewind continuously.


Notes

  1. Agrawal et al. [3] show that their impossibility result holds in a variant of the selective security model, called by [18] the fully non-adaptive model, where the adversary makes simultaneous key-generation and challenge message queries before seeing the public parameters.

  2. Precisely, the functional encryption scheme of [19] only achieves \((\mathsf{poly},\mathsf{poly},\mathsf{poly})\)-sel-IND-Security but later [16] and [6] provided schemes that avoid the selective security model.

  3. Note that we do not refer to their latest ePrint revision but to the specific version posted on 6 March 2014; that version was updated afterwards, and the subsequent revisions represent an extended abstract of the paper that appeared in [5].

  4. Precisely, we show a stronger result: \((0,\mathsf{poly},1)\)-RSIM-Security with negligible advantage is not achievable in the standard model in the auxiliary input setting (see Sect. 3). The auxiliary input setting has already been used by [10] in the same context.

  5. See [1, 10] for a discussion about this condition.

  6. Precisely, it would be possible at the cost of non-efficient simulation.

  7. Recall that \({\mathbf {x}}\) is a vector of challenge messages in which, for \(j\in [\ell ]\), the j-th component consists of a pair \((\mathsf{ind}_j,\mathsf{m}_j)\), where \(\mathsf{ind}_j\) is the “index” and \(\mathsf{m}_j\) is the “payload”.

  8. A similar problem arises in the context of rewinding simulators for constant-round zero-knowledge as in [22].

  9. We remark that our inner-product is defined over \({\mathbb {Z}}_2\) so the predicate is different from that of [32].

  10. The challenge index output by the adversary consists of a tuple \((x_1,\ldots ,x_\ell )\) of vectors where each element \(x_i\in \{0,1\}^n\) for \(i=1,\ldots ,\ell \). For simplicity, henceforth we interpret such challenges as vectors in \(\{0,1\}^{n\cdot \ell }\).

  11. The authors of [18] proved this fact; the proof will appear in the full version of their paper.

  12. For the sake of simplicity, we implicitly assume that the functionality is not parameterized by the security parameter; this can be generalized easily.

References

  1. Abdalla M., Bellare M., Neve G.: Robust encryption. In: Micciancio D. (ed.) TCC 2010: 7th Theory of Cryptography Conference, Zurich, 9–11 Feb. Lecture Notes in Computer Science, vol. 5978, pp. 480–497. Springer, Berlin (2010).

  2. Agrawal S., Freeman D.M., Vaikuntanathan V.: Functional encryption for inner product predicates from learning with errors. In: Lee D.H., Wang X. (eds.) Advances in Cryptology—ASIACRYPT 2011, Seoul, 4–8 Dec 2011. Lecture Notes in Computer Science, vol. 7073, pp. 21–40. Springer, Berlin (2011).

  3. Agrawal S., Gorbunov S., Vaikuntanathan V., Wee H.: Functional encryption: new perspectives and lower bounds. In: Canetti R., Garay J.A. (eds.) Advances in Cryptology—CRYPTO 2013, Part II, Santa Barbara, 18–22 Aug 2013. Lecture Notes in Computer Science, vol. 8043, pp. 500–518. Springer, Berlin (2013).

  4. Agrawal S., Agrawal S., Badrinarayanan S., Kumarasubramanian A., Prabhakaran M., Sahai A.: Function private functional encryption and property preserving encryption: new definitions and positive results. Cryptology ePrint Archive, Report 2013/744, Version posted on 6 Mar 2014. http://eprint.iacr.org/2013/744/20140306:053744 (2014).

  5. Agrawal S., Agrawal S., Badrinarayanan S., Kumarasubramanian A., Prabhakaran M., Sahai A.: On the practical security of inner product functional encryption. In: Proceedings of Public-Key Cryptography—PKC 2015—18th IACR International Conference on Practice and Theory in Public-Key Cryptography, Gaithersburg, 30 Mar–1 Apr 2015, pp. 777–798 (2015).

  6. Ananth P., Boneh D., Garg S., Sahai A., Zhandry M.: Differing-inputs obfuscation and applications. Cryptology ePrint Archive, Report 2013/689. http://eprint.iacr.org/2013/689 (2013).

  7. Backes M., Müller-Quade J., Unruh D.: On the necessity of rewinding in secure multiparty computation. In: Vadhan S.P. (ed.) TCC 2007: 4th Theory of Cryptography Conference, Amsterdam, 21–24 Feb 2007. Lecture Notes in Computer Science, vol. 4392, pp. 157–173. Springer, Berlin (2007).

  8. Barbosa M., Farshim P.: On the semantic security of functional encryption schemes. In: Kurosawa K., Hanaoka G. (eds.) PKC 2013: 16th International Workshop on Theory and Practice in Public Key Cryptography, Nara, 26 Feb–1 Mar 2013. Lecture Notes in Computer Science, vol. 7778, pp. 143–161. Springer, Berlin (2013).

  9. Barkol O., Ishai Y.: Secure computation of constant-depth circuits with applications to database search problems. In: Shoup V. (ed.) Advances in Cryptology—CRYPTO 2005, Santa Barbara, 14–18 Aug 2005. Lecture Notes in Computer Science, vol. 3621, pp. 395–411. Springer, Berlin (2005).

  10. Bellare M., O’Neill A.: Semantically-secure functional encryption: possibility results, impossibility results and the quest for a general definition. In: Proceedings of Cryptology and Network Security—12th International Conference, CANS 2013, Paraty, 20–22 Nov 2013, pp. 218–234 (2013).

  11. Bellare M., Dowsley R., Waters B., Yilek S.: Standard security does not imply security against selective-opening. In: Pointcheval D., Johansson T. (eds.) Advances in Cryptology—EUROCRYPT 2012, Cambridge, 15–19 Apr 2012. Lecture Notes in Computer Science, vol. 7237, pp. 645–662. Springer, Berlin (2012).

  12. Boneh D., Boyen X.: Efficient selective identity-based encryption without random oracles. J. Cryptol. 24(4), 659–693 (2011).


  13. Boneh D., Franklin M.K.: Identity-based encryption from the Weil pairing. In: Kilian J. (ed.) Advances in Cryptology—CRYPTO 2001, Santa Barbara, 19–23 Aug 2001. Lecture Notes in Computer Science, vol. 2139, pp. 213–229. Springer, Berlin (2001).

  14. Boneh D., Waters B.: Conjunctive, subset, and range queries on encrypted data. In: Vadhan S.P. (ed.) TCC 2007: 4th Theory of Cryptography Conference, Amsterdam, 21–24 Feb 2007. Lecture Notes in Computer Science, vol. 4392, pp. 535–554. Springer, Berlin (2007).

  15. Boneh D., Sahai A., Waters B.: Functional encryption: definitions and challenges. In: Ishai Y. (ed.) TCC 2011: 8th Theory of Cryptography Conference, Providence, 28–30 Mar 2011. Lecture Notes in Computer Science, vol. 6597, pp. 253–273. Springer, Berlin (2011).

  16. Boyle E., Chung K.-M., Pass R.: On extractability obfuscation. In: Lindell Y. (ed.) TCC 2014: 11th Theory of Cryptography Conference, San Diego, 24–26 Feb 2014. Lecture Notes in Computer Science, vol. 8349, pp. 52–73. Springer, Berlin (2014).

  17. Cocks C.: An identity based encryption scheme based on quadratic residues. In: Honary B. (ed.) 8th IMA International Conference on Cryptography and Coding, Cirencester, 17–19 Dec 2001. Lecture Notes in Computer Science, vol. 2260, pp. 360–363. Springer, Berlin (2001).

  18. De Caro A., Iovino V., Jain A., O’Neill A., Paneth O., Persiano G.: On the achievability of simulation-based security for functional encryption. In: Canetti R., Garay J.A. (eds.) Advances in Cryptology—CRYPTO 2013, Part II, Santa Barbara, 18–22 Aug 2013. Lecture Notes in Computer Science, vol. 8043, pp. 519–535. Springer, Berlin (2013).

  19. Garg S., Gentry C., Halevi S., Raykova M., Sahai A., Waters B.: Candidate indistinguishability obfuscation and functional encryption for all circuits. In: 54th Annual Symposium on Foundations of Computer Science, Berkeley, 26–29 Oct 2013, pp. 40–49. IEEE Computer Society Press, Berkeley (2013).

  20. Garg S., Gentry C., Halevi S., Sahai A., Waters B.: Attribute-based encryption for circuits from multilinear maps. In: Canetti R., Garay J.A. (eds.) Advances in Cryptology–CRYPTO 2013, Part II, Santa Barbara, 18–22 Aug 2013. Lecture Notes in Computer Science, vol. 8043, pp. 479–499. Springer, Berlin (2013).

  21. Gentry C.: Practical identity-based encryption without random oracles. In: Vaudenay S. (ed.) Advances in Cryptology—EUROCRYPT 2006, St. Petersburg, 28 May–1 June 2006. Lecture Notes in Computer Science, vol. 4004, pp. 445–464. Springer, Berlin (2006).

  22. Goldreich O., Kahan A.: How to construct constant-round zero-knowledge proof systems for NP. J. Cryptol. 9(3), 167–190 (1996).


  23. Goldreich O., Micali S., Wigderson A.: Proofs that yield nothing but their validity and a methodology of cryptographic protocol design (extended abstract). In: 27th Annual Symposium on Foundations of Computer Science, Toronto, 27–29 Oct 1986, pp. 174–187. IEEE Computer Society Press, Berkeley (1986).

  24. Goldwasser S., Micali S.: Probabilistic encryption. J. Comput. Syst. Sci. 28(2), 270–299 (1984).


  25. Goldwasser S., Micali S., Rackoff C.: The knowledge complexity of interactive proof-systems (extended abstract). In: Proceedings of the 17th Annual ACM Symposium on Theory of Computing, Providence, 6–8 May 1985, pp. 291–304 (1985).

  26. Goldwasser S., Kalai Y.T., Popa R.A., Vaikuntanathan V., Zeldovich N.: Reusable garbled circuits and succinct functional encryption. In: Boneh D., Roughgarden T., Feigenbaum J. (eds.) 45th Annual ACM Symposium on Theory of Computing, Palo Alto, 1–4 June 2013, pp. 555–564. ACM Press, New York (2013).

  27. Gorbunov S., Vaikuntanathan V., Wee H.: Functional encryption with bounded collusions via multi-party computation. In: Safavi-Naini R., Canetti R. (eds.) Advances in Cryptology—CRYPTO 2012, Santa Barbara, 19–23 Aug 2012. Lecture Notes in Computer Science, vol. 7417, pp. 162–179. Springer, Berlin (2012).

  28. Gorbunov S., Vaikuntanathan V., Wee H.: Functional encryption with bounded collusions via multi-party computation. In: Safavi-Naini R., Canetti R. (eds.) CRYPTO. Lecture Notes in Computer Science, vol. 7417, pp. 162–179. Springer, Berlin (2012).

  29. Gorbunov S., Vaikuntanathan V., Wee H.: Attribute-based encryption for circuits. In: Boneh D., Roughgarden T., Feigenbaum J. (eds.) STOC, pp. 545–554. ACM, New York (2013).

  30. Goyal V., Pandey O., Sahai A., Waters B.: Attribute-based encryption for fine-grained access control of encrypted data. In: Juels A., Wright R.N., Vimercati S. (eds.) ACM CCS 06: 13th Conference on Computer and Communications Security, Alexandria, 30 Oct–3 Nov 2006, pp. 89–98. ACM Press, New York. Available as Cryptology ePrint Archive Report 2006/309 (2006).

  31. Iovino V., Żebrowski K.: Simulation-based secure functional encryption in the random oracle model. In: Proceedings of Progress in Cryptology—LATINCRYPT 2015—4th International Conference on Cryptology and Information Security in Latin America, Guadalajara, 23–26 Aug 2015, pp. 21–39 (2015).

  32. Katz J., Sahai A., Waters B.: Predicate encryption supporting disjunctions, polynomial equations, and inner products. In: Smart N.P. (ed.) Advances in Cryptology—EUROCRYPT 2008, Istanbul, 13–17 Apr 2008. Lecture Notes in Computer Science, vol. 4965, pp. 146–162. Springer, Berlin (2008).

  33. Lewko A.B., Okamoto T., Sahai A., Takashima K., Waters B.: Fully secure functional encryption: attribute-based encryption and (hierarchical) inner product encryption. In: Gilbert H. (ed.) Advances in Cryptology—EUROCRYPT 2010, French Riviera, 30 May–3 June 2010. Lecture Notes in Computer Science, vol. 6110, pp. 62–91. Springer, Berlin (2010).

  34. Okamoto T., Takashima K.: Adaptively attribute-hiding (hierarchical) inner product encryption. In: Pointcheval D., Johansson T. (eds.) Advances in Cryptology—EUROCRYPT 2012, Cambridge, 15–19 Apr 2012. Lecture Notes in Computer Science, vol. 7237, pp. 591–608. Springer, Berlin (2012).

  35. O’Neill A.: Definitional issues in functional encryption. Cryptology ePrint Archive, Report 2010/556. http://eprint.iacr.org/ (2010).

  36. Sahai A., Waters B.R.: Fuzzy identity-based encryption. In: Cramer R. (ed.) Advances in Cryptology—EUROCRYPT 2005, Aarhus, 22–26 May 2005. Lecture Notes in Computer Science, vol. 3494, pp. 457–473. Springer, Berlin (2005).

  37. Shamir A.: Identity-based cryptosystems and signature schemes. In: Blakley G.R., Chaum D. (eds.) Advances in Cryptology—CRYPTO’84, Santa Barbara, 19–23 Aug 1984. Lecture Notes in Computer Science, vol. 196, pp. 47–53. Springer, Berlin (1984).

  38. Waters B.: Functional encryption for regular languages. In: Safavi-Naini R., Canetti R. (eds.) Advances in Cryptology—CRYPTO 2012, Santa Barbara, 19–23 Aug 2012. Lecture Notes in Computer Science, vol. 7417, pp. 218–235. Springer, Berlin (2012).


Acknowledgments

Vincenzo Iovino is supported by the Luxembourg National Research Fund (FNR Grant No. 7884937). Part of this work was done while Vincenzo Iovino was at the University of Warsaw, supported by the WELCOME/2010-4/2 Grant funded within the framework of the EU Innovative Economy Operational Programme. We thank Abhishek Jain for helpful discussions and for pointing out a bug in an earlier version of this manuscript. Vincenzo Iovino thanks Yu Li for his precious comments and Sadeq Dousti for invaluable discussions and for suggesting this line of research to him.

Author information

Correspondence to Vincenzo Iovino.

Additional information

Communicated by D. Jungnickel.

Appendices

Appendix 1: RSIM-Security \(\implies \) IND-Security

Theorem 7

Let \(\mathsf {FE}\) be a functional encryption scheme that is RSIM-Secure, then \(\mathsf {FE}\) is IND-Secure as well.

Proof

Suppose towards a contradiction that there exists an adversary \(\mathcal {A}=(\mathcal {A}_0,\mathcal {A}_1)\) that breaks the IND-Security of \(\mathsf {FE}\). Consider the following adversary \(\mathcal {B}^b=(\mathcal {B}_0^b,\mathcal {B}_1^b)\), for \(b\in \{0,1\}\), and distinguisher \({\mathcal D}\), for the RSIM-Security of \(\mathsf {FE}\).

[Figures d and e, which define the adversary \(\mathcal {B}^b\) and the distinguisher \({\mathcal D}\), are not reproduced here.]

Let \(\mathsf{IND}^{\mathsf {FE},b}_{\mathcal {A}}\) be an experiment identical to the IND-Security experiment except that the challenger always encrypts challenge vector \({\mathbf {x}}_b\) (instead of choosing one of the two challenges at random). Then, it holds that for any function \(\epsilon (\lambda )\) that is the inverse of a polynomial:

$$\begin{aligned} \mathsf{IND}^{\mathsf {FE},0}_{\mathcal {A}}&=\mathsf{RealExp}^{\mathsf {FE},\mathcal {B}^0} \approx _\epsilon \mathsf{IdealExp}^{\mathsf {FE},\mathcal {B}^0}_{\mathsf{Sim}} =\mathsf{IdealExp}^{\mathsf {FE},\mathcal {B}^1}_{\mathsf{Sim}}, \end{aligned}$$

and

$$\begin{aligned} \mathsf{IdealExp}^{\mathsf {FE},\mathcal {B}^1}_{\mathsf{Sim}} \approx _\epsilon \mathsf{RealExp}^{\mathsf {FE},\mathcal {B}^1} =\mathsf{IND}^{\mathsf {FE},1}_{\mathcal {A}}, \end{aligned}$$

where, more specifically:

  1. 1.

    \(\mathsf{IND}^{\mathsf {FE},0}_{\mathcal {A}}=\mathsf{RealExp}^{\mathsf {FE},\mathcal {B}^0}\) (i.e., the probability that \(\mathcal {A}\) wins in experiment \(\mathsf{IND}^{\mathsf {FE},0}_\mathcal {A}\) equals the probability that \({\mathcal D}\) outputs 1 on input the output of \(\mathsf{RealExp}^{\mathsf {FE},\mathcal {B}^0}\)) holds by definition of \(\mathcal {B}^0\) and \({\mathcal D}\).

  2. 2.

    \(\mathsf{RealExp}^{\mathsf {FE},\mathcal {B}^0}\approx _\epsilon \mathsf{IdealExp}^{\mathsf {FE}, \mathcal {B}^0}_{\mathsf{Sim}}\). This holds by the RSIM-Security of \(\mathsf {FE}\).

  3. 3.

    \(\mathsf{IdealExp}^{\mathsf {FE},\mathcal {B}^0}_{\mathsf{Sim}}=\mathsf{IdealExp}^{\mathsf {FE},\mathcal {B}^1}_{\mathsf{Sim}}\) holds because if \(\mathcal {A}\) breaks the IND-Security of \(\mathsf {FE}\), then with all but negligible probability, the queries issued by \(\mathcal {A}\) (and thus by \(\mathcal {B}\)) are such that \(F(k,\mathbf {x}_0)=F(k,\mathbf {x}_1)\) for any key k for which \(\mathcal {A}\) has issued a key-generation query.

  4. 4.

    \(\mathsf{IdealExp}^{\mathsf {FE},\mathcal {B}^1}_{\mathsf{Sim}} \approx _\epsilon \mathsf{RealExp}^{\mathsf {FE},\mathcal {B}^1}\) holds again by the RSIM-Security of \(\mathsf {FE}\).

  5. 5.

    Finally, \(\mathsf{RealExp}^{\mathsf {FE},\mathcal {B}^1}=\mathsf{IND}^{\mathsf {FE},1}_{\mathcal {A}}\) (i.e., the probability that \(\mathcal {A}\) wins in experiment \(\mathsf{IND}^{\mathsf {FE},1}_\mathcal {A}\) equals the probability that \({\mathcal D}\) outputs 1 on input the output of \(\mathsf{RealExp}^{\mathsf {FE},\mathcal {B}^1}\)) holds by definition of \(\mathcal {B}^1\) and \({\mathcal D}\).

But, if for any \(\epsilon \), \(\mathsf{IND}^{\mathsf {FE},0}_\mathcal {A}\approx _\epsilon \mathsf{IND}^{\mathsf {FE},1}_\mathcal {A}\), then \(\mathcal {A}\) does not break the IND-Security of \(\mathsf {FE}\), a contradiction. \(\square \)

Appendix 2: Proof of Theorem 5

Proof

(Simplified simulation) As explained before, for purposes of exposition, we first present a simplified simulation strategy in which the output of the simulator is “biased” (i.e., it has a different distribution than the output of the adversary in the real experiment) and then we illustrate how to remove this restriction.

Our simulator \(\mathsf{Sim}=(\mathsf{Sim}_0,\mathsf{Sim}_1)\) works as follows. \(\mathsf{Sim}_0\) takes as input the master public and secret key, the list \(\mathcal {Q}=(k_i,\mathsf{Sk}_{k_i},F(k_i,{\mathbf {x}}))_{i\in [q_1]}\), and the intentionally leaked information about the challenge messages (see footnote 7), \(F(\epsilon ,{\mathbf {x}})=(\mathsf{ind}_j,|\mathsf{m}_j|)_{j\in [\ell ]}\). Then, for each \(i\in [q_1]\), \(\mathsf{Sim}_0\) checks whether \(P(k_i,\mathsf{ind}_j)=1\) for some \(j\in [\ell ]\). If this is the case, then \(\mathsf{Sim}_0\) learns \(\mathsf{m}_j\). Let \(\mathcal {X}\) be the tuple of messages (indices with their payloads) learnt by \(\mathsf{Sim}_0\). Then, for each pair in \(\mathcal {X}\), \(\mathsf{Sim}_0\) generates a normal ciphertext by invoking the encryption algorithm. For all the other indices, for which \(\mathsf{Sim}_0\) was not able to learn the corresponding payload, \(\mathsf{Sim}_0\) generates ciphertexts with a random payload. Let \({\mathbf {x}}^\star \) be the resulting message vector that the simulator used to produce the challenge ciphertexts.

Then, \(\mathsf{Sim}_0\) executes \(\mathcal {A}_1\) on input the challenge ciphertexts generated in this way. When \(\mathcal {A}_1\) invokes its key-generation oracle on input key k, \(\mathsf{Sim}_1\) is asked to generate a corresponding secret key given k and \(F(k,{\mathbf {x}})\). Now we have two cases:

  1. 1.

    \(P(k,\mathsf{ind}_j)=1\) for some \(j\in [\ell ]\): Then, \(\mathsf{Sim}\) learns \(\mathsf{m}_j\). If \(\mathsf{m}_j\) was already known to \(\mathsf{Sim}\), it means that the corresponding challenge ciphertext was well formed when \(\mathsf{Sim}_0\) invoked \(\mathcal {A}_1\). Then \(\mathsf{Sim}_1\) generates the secret key for k (using the master secret key) and the simulation continues. On the other hand, if \(\mathsf{Sim}_0\) did not know \(\mathsf{m}_j\), then the ciphertext corresponding to \(\mathsf{ind}_j\) was for a random message. Therefore, \(\mathsf{Sim}_0\) must halt \(\mathcal {A}_1\) and rewind it. \(\mathsf{Sim}_0\) adds \((\mathsf{ind}_j,\mathsf{m}_j)\) to \(\mathcal {X}\) (and thus updates \({\mathbf {x}}^\star \)) and, with this new knowledge, \(\mathsf{Sim}_0\) rewinds \(\mathcal {A}_1\) on input fresh challenge ciphertexts (i.e., the encryption of the updated \({\mathbf {x}}^\star \)). The above reasoning easily extends to the case that \(P(k,\mathsf{ind}_j)=1\) for more than one j.

  2. 2.

    \(P(k,\mathsf{ind}_j)=0\) for all \(j\in [\ell ]\): In this case, a secret key for k cannot be used to decrypt any of the challenge ciphertexts. Then, \(\mathsf{Sim}_1\) generates the secret key for k (using the master secret key) and the simulation continues.

If at some point the adversary halts with some output, the simulator outputs what the adversary outputs. This concludes the description of the simulator \(\mathsf{Sim}\).
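The rewinding loop just described can be sketched in Python. This is a toy model under loud assumptions: "encryption" is the identity (a ciphertext is just an (index, payload) pair), the adversary is a generator yielding query events, and `simulate`, `_toy_adv` and the payload values are all hypothetical names for illustration.

```python
import random

def simulate(challenges, adversary, predicate):
    """Rewinding loop of the simplified simulation. `challenges` is the list of
    pairs (ind_j, m_j); initially only the indices are known to the simulator.
    `adversary(cts)` is a generator yielding ("query", k) events and finally
    ("output", value). Returns (adversary_output, number_of_rewinds)."""
    known = {}        # j -> payload m_j learnt so far
    rewinds = 0
    while True:
        # Real payload where known, random payload elsewhere.
        cts = [(ind, known.get(j, random.random()))
               for j, (ind, _) in enumerate(challenges)]
        restart = False
        for event in adversary(cts):
            if event[0] == "query":
                k = event[1]
                for j, (ind, m) in enumerate(challenges):
                    if predicate(k, ind) and j not in known:
                        known[j] = m          # learn the payload behind ind_j
                        restart = True
                if restart:
                    rewinds += 1              # halt this run and rewind
                    break
            else:
                return event[1], rewinds      # adversary halted with an output
        # loop again: a fresh run with the updated knowledge; each rewind learns
        # at least one new payload, so there are at most len(challenges) rewinds

# Toy run: predicate k == ind, two challenge messages, adversary queries both keys.
def _toy_adv(cts):
    yield ("query", "k0")
    yield ("query", "k1")
    yield ("output", cts)

result, num_rewinds = simulate([("k0", 10), ("k1", 20)], _toy_adv, lambda k, ind: k == ind)
```

In the toy run each of the two queries forces one rewind, after which both ciphertexts carry the real payloads, matching the bound of at most \(\ell \) rewinds argued below.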

It remains to show that the simulated challenge ciphertexts do not change \(\mathcal {A}_1\)’s behavior significantly. We call a key-generation query good if the simulator can answer it without rewinding the adversary according to the previous rules. We call a completed execution of the adversary between two rewinds a run. First, notice that the number of runs, i.e., the number of times the simulator rewinds, is upper-bounded by the number of challenge messages \(\ell \), which is polynomial in the security parameter. In fact, each time a query is not good and the simulator needs to rewind, the simulator learns a new pair \((\mathsf{ind}_j,\mathsf{m}_j)\), for some \(j\in [\ell ]\), and the same query can never cause a rewind again. In the last run, the one in which all the key-generation queries are good, the view of the adversary is indistinguishable from that in the real game. This follows from the IND-Security of \(\mathsf {PIPE}\). In fact, the evaluations of the secret keys on the challenge ciphertexts in the real experiment give the same values as the evaluations of the simulated secret keys on the simulated ciphertexts in the ideal experiment, since the secret keys are generated honestly. Therefore, IND-Security guarantees that in this case the view in the real experiment is indistinguishable from that in the ideal experiment.

The actual simulation The previous simulation incurs the following problem: the output of the simulator could be biased. Consider for example an adversary that with probability 1/3 does not ask any query and with probability 2/3 asks a query that triggers a rewind, and then outputs its computation. In the real experiment the transcript contains zero queries with probability 1/3, whereas the output of the ideal experiment contains zero queries with probability much larger than 1/3 (see footnote 8). Above, we have shown that the last transcript of the simulator is indistinguishable from the transcript of the adversary in the real experiment, but this final output could be biased, as it corresponds to different runs of the adversary.

Thus, we need the following smarter strategy. First, recall that by standard use of Chernoff’s bound we can compute a \((\beta ,\gamma )\)-approximation of a random variable, where the estimate is \(\beta \)-close with probability \(1-\gamma \). Moreover, this can be done by sampling the random variable a number of times that is polynomial in \(1/\beta \) and logarithmic in \(1/\gamma \). Let \(\mu \) be some fixed negligible function and \(\nu \) be the distinguishing advantage we wish to achieve (see Definition 3). For \(i=0\) to \(\ell \), the simulator does the following.

Consider the experiment \(X_i\) in which the simulator executes the adversary in a run where the information it learnt consists of the pairs \((\mathsf{ind}_j,\mathsf{m}_j)\) for \(j=1,\ldots ,i\), and we assume that for \(i=0\) the simulator starts the run with random pairs. The run is executed as described in the simplified simulation, where if the adversary triggers a rewind then the simulator outputs a dummy value, otherwise the simulator outputs what the adversary outputs.

We denote by \(p_i\) the probability that in experiment \(X_i\) the adversary triggers a rewind. Setting \(\nu '=\nu ^{1/2}/\ell \), the simulator computes a \((\nu ',\delta )\)-estimate \(\tilde{p}_i\) of \(p_i\) for some negligible function \(\delta \) (the reason for setting \(\nu '\) to this value will become clear at the end of the analysis). If the estimate \(\tilde{p}_i\le \mu \), then the simulator executes the adversary in experiment \(X_i\); if the adversary terminates without triggering a rewind, the simulator outputs what the adversary outputs, otherwise the simulator outputs a dummy value. Instead, if the estimate is greater than \(\mu \), then the simulator increments i and proceeds to the next step.
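The estimate-then-commit step can be sketched as follows. This is a toy stand-in: the true rewind probabilities are passed in directly (in the actual simulation each sample would mean re-executing the adversary in experiment \(X_i\)), and the names `estimate`, `choose_run` and the numeric parameters are illustrative assumptions.

```python
import math
import random

def estimate(sample_once, beta, gamma):
    """(beta, gamma)-approximation of p = Pr[sample_once() is True]: by a
    Chernoff/Hoeffding bound, O(log(1/gamma)/beta^2) samples give an estimate
    within beta of p except with probability at most gamma."""
    n = math.ceil(math.log(2.0 / gamma) / (2.0 * beta * beta))
    return sum(sample_once() for _ in range(n)) / n

def choose_run(rewind_probs, mu, nu_prime, gamma=1e-6):
    """rewind_probs[i] stands in for the true probability p_i that run X_i
    triggers a rewind. Return the first i whose estimated rewind probability
    is at most mu (the threshold used by the simulator)."""
    for i, p in enumerate(rewind_probs):
        p_tilde = estimate(lambda: random.random() < p, nu_prime, gamma)
        if p_tilde <= mu:
            return i                  # commit to experiment X_i
    return len(rewind_probs)          # all estimates large: fall through to the last run

# Toy parameters: the first two runs almost always rewind, the third never does.
i_star = choose_run([0.9, 0.9, 0.0], mu=0.05, nu_prime=0.1)
```

Here the simulator skips the first two runs (their estimated rewind probability far exceeds the threshold) and commits to the third.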

Let us compute the advantage of a PPT distinguisher in telling apart the real from the ideal experiment. By assumption on the estimate and by construction of the simulator, the output of the simulator is the output of the adversary in experiment \(X_1\) with probability at most \(w_1=(1- \delta )(\mu +\nu ')\) and is the output of the adversary in experiment \(X_2\) with probability at most \(a_2(1- \delta )(\mu +\nu ')\), where \(a_2=1-q_1<1\), and so forth. Therefore, the output of the simulator is the output of the adversary in experiment \(X_i\) with probability at most \((1- \delta )(\mu +\nu ')\).

If the output of the simulator equals the output of the adversary in experiment \(X_i\), then the distinguishing advantage is at most \(\nu '\) up to some negligible factor. Indeed, if the adversary does not trigger a rewind, the two experiments are computationally indistinguishable by IND-Security, and in experiment \(X_i\) the adversary triggers a rewind with probability at most \(\mu +\nu '\), where \(\mu \) is negligible. By definition of \(\nu '\), it follows that the overall advantage is at most \(\ell \nu '^2\le \nu \) up to a negligible factor. \(\square \)

Appendix 3: Positive results for PE with private-index

In this section we go further, showing equivalences for PE with private-index for several functionalities, including Anonymous IBE, inner-product over \({\mathbb {Z}}_2\), and monotone conjunctive Boolean formulae, and showing the existence of RSIM-Secure schemes for all classes of \(\mathsf{NC}_0\) circuits.

Since in Appendix 1 we show that RSIM-Security implies IND-Security, to establish the equivalence for the functionalities we study it is enough to prove the other direction, namely that IND-Security implies RSIM-Security.

Abstracting the properties needed by the simulator A closer look at the proof of Theorem 5 hints at some abstract properties that a predicate has to satisfy in order for the simulator to be able to produce an indistinguishable view. We identify the following two properties.

The execution of the simulator is divided into runs. At run j, the simulator invokes the adversary on input a ciphertext for message \(x_j\), whereas the adversary chose x, and keeps the invariant that \(x_j\) gives the same results as x with respect to the queries asked by the adversary up to that run.

At some point the adversary asks a query k for which \(F(k,x)\ne F(k,x_j)\ne \bot \), so the simulator is not able to answer the query in this run. But if the functionality has property (1), that it is easy to pre-sample a new value \(x_{j+1}\) satisfying all queries including the new one, then the simulator can rewind the adversary, this time on input an encryption of the value \(x_{j+1}\).

This is still not sufficient, since there is no bound on the maximum number of rewinds needed by the simulator; we thus have to require property (2) to force the simulation to progress towards termination.
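Property (1), the pre-sampling of a consistent message, can be illustrated with a brute-force toy (a real pre-sampler must be efficient for the functionality at hand). The function names `pre_sample` and `F`, the IBE-style example functionality, and the small domain are all hypothetical choices for illustration.

```python
from itertools import product

def pre_sample(F, seen, domain):
    """Return some x' consistent with every functional value revealed so far,
    i.e. F(k, x') == v for all (k, v) in `seen`, or None if no such x' exists.
    Brute force over a small domain; property (1) asks that such a value can
    be found efficiently."""
    for x in domain:
        if all(F(k, x) == v for k, v in seen):
            return x
    return None

# Example functionality in the style of (Anonymous) IBE: a message is an
# (identity, payload) pair and a key k recovers the payload iff k matches.
def F(k, x):
    ind, m = x
    return m if k == ind else None    # None plays the role of 'bottom'

domain = list(product(["a", "b", "c"], [0, 1]))
# Queries so far: key "a" revealed bottom, key "b" revealed payload 1.
x_new = pre_sample(F, [("a", None), ("b", 1)], domain)
```

The pre-sampled message must have identity "b" and payload 1 to stay consistent with both answers, which is exactly the invariant the simulator's rewinds maintain.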

To give a clear example, consider how a simulator could work for Anonymous IBE. Suppose that the adversary chooses crypto as the challenge identity and the simulator chooses aaaaa as the simulated identity for the ciphertext it will pass to the adversary. Then, the adversary issues a query for identity bbbbb and the simulator learns that the predicate is not satisfied, so the query gives the same evaluation on both the challenge identity and the simulated identity. This is consistent with the query, so the simulator can continue the simulation.

Now, suppose that the adversary issues a query for identity crypto. Then, the simulated identity is no longer compatible with the new query and the simulator has to rewind the adversary; but, since the simulator has learnt the challenge identity crypto and the corresponding payload exactly, in the next run the simulator is able to finish the simulation perfectly. This simulation strategy is simplified; as we explained in Sect. 5, the simulator also needs to guarantee that the output is not biased. In Appendix 3b, we show how to implement a more complicated strategy for the inner-product predicate over \({\mathbb {Z}}_2\).
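The crypto/aaaaa/bbbbb walk-through above can be traced in a few lines. This is a toy trace, not a real scheme: `aibe_F` is an illustrative stand-in for the AIBE functionality, and the "queries" compare functional values directly rather than running encryption.

```python
def aibe_F(identity, x):
    """Toy Anonymous IBE functionality: a key for `identity` recovers the
    payload iff it matches the identity of the message; None models 'bottom'."""
    ind, payload = x
    return payload if identity == ind else None

challenge = ("crypto", "secret")    # the adversary's hidden challenge message
simulated = ["aaaaa", "dummy"]      # the simulator's stand-in (rewinds update it)

transcript = []
for query in ["bbbbb", "crypto"]:
    real_val = aibe_F(query, challenge)            # what the ideal oracle reveals
    sim_val = aibe_F(query, tuple(simulated))      # what the simulated ciphertext gives
    if real_val == sim_val:
        transcript.append((query, "answered"))     # consistent: the run continues
    else:
        simulated = [query, real_val]              # learn identity and payload exactly
        transcript.append((query, "rewind"))       # rewind; the next run is perfect
```

The query for bbbbb is consistent (both evaluations give bottom) and the run continues; the query for crypto forces one rewind, after which the simulator knows the challenge exactly.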

1.1 Appendix 3a: Equivalence for anonymous IBE

The following theorem is an extension of Theorem 5.

Theorem 8

Let \(\mathsf {AIBE}\) be an Anonymous IBE scheme that is \((\mathsf{poly},\mathsf{poly},\mathsf{poly})\)-IND-Secure. Then, \(\mathsf {AIBE}\) is \((\mathsf{poly},\mathsf{poly},\mathsf{poly})\)-RSIM-Secure as well.

Intuition Notice that in an Anonymous IBE scheme the ciphertext does not leak the identity for which it has been generated, and thus the special key \(\epsilon \) does not provide this information as it does for a public-index scheme. Despite this, when the adversary issues a key-generation query for a key k such that \(F(k,x)\ne \perp \), the simulator learns that x is a message for index (or identity, in the case of AIBE) k and payload F(k, x). Thus, the simulator rewinds the adversary on input a freshly generated ciphertext for that pair and can safely generate an honest secret key for k upon request.

Another important difference from the proof of Theorem 5 is that the simulator could be forced to rewind without gaining any new knowledge, which could result in a never-ending simulation. This happens, for example, in the following case: let x be a challenge message chosen by the adversary and let \(x^\star \) be the message chosen by the simulator to simulate the ciphertext for x. If the adversary issues a key-generation query for a key k such that \(F(k,x)=\perp \) but \(F(k,x^\star )\ne \perp \), then the simulator is forced to rewind without gaining any new knowledge, and this could happen indefinitely. However, the IND-Security of the \(\mathsf {AIBE}\) scheme guarantees that such a situation can happen only with negligible probability, and thus the simulator can simply abort in such cases.

Proof

(Simplified simulation) Our simulator \(\mathsf{Sim}=(\mathsf{Sim}_0,\mathsf{Sim}_1)\) works as follows. \(\mathsf{Sim}_0\) takes as input the master public and secret key, the list \(\mathcal {Q}=(k_i,\mathsf{Sk}_{k_i},F(k_i,{\mathbf {x}}))_{i\in [q_1]}\), and the intentionally leaked information about the challenge messages \(F(\epsilon ,{\mathbf {x}})=(|\mathsf{ind}_j|,|\mathsf{m}_j|)_{j\in [\ell ]}\). Then, for each \(i\in [q_1]\), \(\mathsf{Sim}_0\) checks whether \(F(k_i,x_j)\ne \perp \) for some \(j\in [\ell ]\). If this is the case, then \(\mathsf{Sim}_0\) learns that message \(x_j\) is for identity \(\mathsf{ind}_j=k_i\) and payload \(\mathsf{m}_j=F(k_i,x_j)\).

Let \(\mathcal {X}\) be the set of tuples of the form \((j,\mathsf{ind}_j,\mathsf{m}_j)\) learnt by \(\mathsf{Sim}_0\). Then, for each tuple in \(\mathcal {X}\), \(\mathsf{Sim}_0\) generates a normal ciphertext for message \(x_\mathsf {j}^\star =(\mathsf{ind}^\star _j,\mathsf{m}^\star _j)\), with \(\mathsf{ind}^\star _j=\mathsf{ind}_j\) and \(\mathsf{m}^\star _j=\mathsf{m}_j\), by invoking the encryption algorithm. For every other position k for which \(\mathsf{Sim}_0\) was not able to learn the corresponding index and payload, \(\mathsf{Sim}_0\) generates a ciphertext for a random \(x_k^\star =(\mathsf{ind}^\star _k,\mathsf{m}^\star _k)\).

Then, \(\mathsf{Sim}_0\) executes \(\mathcal {A}_1\) on input the challenge ciphertexts \((\mathsf{Ct}^\star _j)_{j\in [\ell ]}\), where \(\mathsf{Ct}^\star _j\) is for message \(x_\mathsf {j}^\star =(\mathsf{ind}^\star _j,\mathsf{m}^\star _j)\) as described above. When \(\mathcal {A}_1\) invokes its key-generation oracle on input key k, \(\mathsf{Sim}_1\) is asked to generate a corresponding secret key given k and \(F(k,{\mathbf {x}})\). Now we have the following cases:

  1. 1.

If for each \(j\in [\ell ]\) with \(F(k,x_j)\ne \perp \) it holds that \((j,k,F(k,x_j))\in \mathcal {X}\): Then we have two sub-cases:

    1. (a)

If there exists an index \(\mathsf {j}\in [\ell ]\) such that \(F(k,x_\mathsf {j})=\perp \) but \(F(k,x_\mathsf {j}^\star )\ne \perp \), then \(\mathsf{Sim}_0\) aborts.

    2. (b)

      Otherwise, \(\mathsf{Sim}_1\) honestly generates a secret key \(\mathsf{Sk}_k\) for key k. Notice that it holds that \(F(k,x_\mathsf {j}^\star )=F(k,x_j)\) for all \(j\in [\ell ]\).

  2. 2.

    If there exists an index \(j\in [\ell ]\) such that \(F(k,x_j)\ne \perp \) but \((j,k,F(k,x_j))\notin \mathcal {X}\): Then \(F(k,x_\mathsf {j}^\star )\ne F(k,x_j)\) with high probability. Thus \(\mathsf{Sim}_0\) adds \((j,k,F(k,x_j))\) to \(\mathcal {X}\) and rewinds \(\mathcal {A}_1\) on freshly generated ciphertexts based on the information \(\mathsf{Sim}_0\) has collected in \(\mathcal {X}\) so far.

  3. 3.

    If for all \(j\in [\ell ]\), \(F(k,x_j)=\perp \): Then we have two sub-cases:

    1. (a)

If there exists an index \(\mathsf {j}\in [\ell ]\) such that \(F(k,x_\mathsf {j})=\perp \) but \(F(k,x_\mathsf {j}^\star )\ne \perp \), then \(\mathsf{Sim}_0\) aborts.

    2. (b)

      Otherwise, \(\mathsf{Sim}_1\) honestly generates a secret key \(\mathsf{Sk}_k\) for key k. Notice that it holds that \(F(k,x_\mathsf {j}^\star )=F(k,x_j)=\perp \) for all \(j\in [\ell ]\).

If after a query the simulator has to rewind the adversary, we say that such a query triggered a rewind. If at some point the adversary halts with some output, then the simulator outputs what the adversary outputs. This concludes the description of the simulator \(\mathsf{Sim}\).
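The simulator's main loop can be sketched as follows. This is a hypothetical, simplified rendering of ours, not the paper's pseudocode: `run_adversary` is a placeholder modelling \(\mathcal {A}_1\) together with the ideal-functionality oracle, and the abort cases 1.(a)/3.(a) are omitted.

```python
# A simplified sketch of Sim's rewinding loop for AIBE (names ours).
# run_adversary takes the current per-position plaintexts and yields key
# queries k together with the ideal leakage (F(k, x_1), ..., F(k, x_ell)),
# where a None entry stands for "bot".

import secrets

def simulate_aibe(ell, run_adversary):
    X = {}                                   # j -> (ind_j, m_j) learnt so far
    while True:
        # Learnt positions are encrypted faithfully, the rest randomly.
        xs = [X.get(j, (secrets.token_hex(8), secrets.token_hex(8)))
              for j in range(ell)]
        rewound = False
        for k, leak in run_adversary(xs):    # leak[j] = F(k, x_j)
            for j, payload in enumerate(leak):
                if payload is not None and j not in X:
                    X[j] = (k, payload)      # case 2: new knowledge
                    rewound = True
            if rewound:
                break                        # rewind with updated info
        if not rewound:
            return X                         # all queries were good
```

Each rewind teaches the simulator a new position, so the loop performs at most \(\ell + 1\) runs, matching the run bound argued below.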

Let us first bound the probability that the simulator aborts during its simulation; this happens in cases 1.(a) and 3.(a). Let us focus on case 1.(a); the other is symmetric. Notice that when case 1.(a) occurs, \(F(k,x_\mathsf {j})=\perp \) but \(F(k,x_\mathsf {j}^\star )\ne \perp \), meaning that \(\mathsf{ind}_\mathsf {j}\ne k\) and \(\mathsf{ind}^\star _\mathsf {j}=k\), and that all the previous key-generation queries were good, meaning that no rewind has been triggered. Therefore, if this event happens with non-negligible probability, \(\mathcal {A}\) can be used to build another adversary \(\mathcal {B}\) that distinguishes between the encryptions of \(x_\mathsf {j}\) and \(x_\mathsf {j}^\star \) with the same probability, thus contradicting the IND-Security of the scheme. Precisely, \(\mathcal {B}\) simulates the view of \(\mathcal {A}\) as described before (i.e., simulating the interface with the simulator) and returns as its challenges two messages with indices \(\mathsf{ind}_0=\mathsf{ind}_j\) and \(\mathsf{ind}_1=\mathsf{ind}^\star _j\), where the two indices are as before. Then, \(\mathcal {B}\) runs \(\mathcal {A}\) on a tuple of ciphertexts identical to that described before, except that \(\mathsf{Ct}^\star _j\) is set to the challenge ciphertext received from the challenger of the IND-Security game. If at some point \(\mathcal {A}\) asks a query for identity \(\mathsf{ind}^\star _j\), then \(\mathcal {B}\) outputs 1 as its guess; otherwise \(\mathcal {B}\) outputs 0.
Notice that if the challenge ciphertext given to \(\mathcal {B}\) is for the challenge message with identity \(\mathsf{ind}_1=\mathsf{ind}^\star _j\), then \(\mathcal {B}\) perfectly simulates the view of \(\mathcal {A}\) when interacting with the above simulator, and thus, by the hypothesis that case 1.(a) occurs with non-negligible probability, \(\mathcal {B}\) outputs 1 with non-negligible probability. On the other hand, if the challenge ciphertext is for the challenge message with identity \(\mathsf{ind}_0=\mathsf{ind}_j\), then the view of \(\mathcal {A}\) is completely independent of \(\mathsf{ind}^\star _j\), so the probability that \(\mathcal {A}\) asks a query for that identity is negligible, and thus \(\mathcal {B}\) outputs 0 with overwhelming probability.

Finally, notice that the number of runs, i.e., the number of times the simulator rewinds (a rewind happens when case 2. occurs), is upper-bounded by the number of challenge messages \(\ell \), which is polynomial in the security parameter. In fact, every time a query is not good and the simulator needs to rewind the adversary, the simulator learns a new pair \((\mathsf{ind}_j,\mathsf{m}_j)\) for some \(j\in [\ell ]\), and the same query will never cause a rewind again. In the last run, the one in which all the key-generation queries are good, the view of the adversary is indistinguishable from that in the real game. This follows from the IND-Security of \(\mathsf {AIBE}\) by noting that the evaluations of the secret keys on the challenge ciphertexts in the real experiment give the same values as the evaluations of the simulated secret keys on the simulated ciphertexts in the ideal experiment, since the secret keys are generated honestly. Therefore, IND-Security guarantees that in this case the view in the real experiment is indistinguishable from that in the ideal experiment.

Non-biased simulation We stress that this is a simplified simulation and the simulator also needs to guarantee that its output is not biased. This can be done as explained in the security reduction of Theorem 5. \(\square \)

1.2 Appendix 3b: Equivalence for inner-product over \({\mathbb {Z}}_2\)

The functionality inner-product over \({\mathbb {Z}}_2\) (\(\textsf {IP}\))Footnote 9 is defined in the following way. It is a family of predicates with key space \(K_n\) and index space \(I_n\) consisting of binary strings of length n, and for any \(k\in K_n,x\in I_n\) the predicate \(\textsf {IP}(k,x)=1\) if and only if \(\sum _{i\in [n]}k_i\cdot x_i=0 \mod 2\).
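As a concrete reference, the predicate can be transcribed directly. This is a minimal sketch of ours; representing keys and indices as lists of bits is our choice.

```python
# IP predicate over Z_2: IP(k, x) = 1 iff <k, x> = 0 (mod 2).
# Keys and indices are binary strings of length n, here lists of bits.

def ip(k, x):
    assert len(k) == len(x)
    return 1 if sum(ki * xi for ki, xi in zip(k, x)) % 2 == 0 else 0
```

For instance, `ip([1, 0, 1], [1, 0, 1])` evaluates the inner product \(1+0+1 = 2 \equiv 0 \pmod 2\), so the predicate holds.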

Henceforth, we assume that the reader is familiar with the notion of pre-image samplability introduced by O’Neill [35].

In our positive results for \(\textsf {IP}\) over \({\mathbb {Z}}_2\) we use the following theorem.

Theorem 9

[35] The functionality \(\textsf {IP}\) over \({\mathbb {Z}}_2\) is pre-image samplable.

Theorem 10

If a predicate encryption scheme \(\mathsf {PE}\) for \(\textsf {IP}\) is \((\mathsf{poly},\mathsf{poly},\mathsf{poly})\)-IND-Secure then \(\mathsf {PE}\) is \((\mathsf{poly},\mathsf{poly},\mathsf{poly})\)-RSIM-Secure as well.

Proof

(Simplified simulation) The proof follows the lines of Theorem 5. For simplicity, we assume that the adversary outputs challenge messages with the payload set to 1 (i.e., the functionality returns values in \(\{0,1\}\)), but this can be easily generalized by handling the payload as in the proof of Theorem 5.

Let \(x=(x_1,\ldots ,x_\ell )\in \{0,1\}^{n\cdot \ell }\) be the challenge indexFootnote 10 output by the adversary \(\mathcal {A}_0\) and let \((w_i)_{i=1}^{q_1}\) be the queries asked by \(\mathcal {A}_0\) (i.e. the queries asked before seeing the challenge ciphertexts).

As usual, we divide the execution of the simulator into runs; in any run the simulator keeps an index \(x^0=(x^0_1, \ldots ,x^0_\ell )\in \{0,1\}^{n\cdot \ell }\) that it uses to encrypt the simulated ciphertext given to the adversary in that run.

Let \(Y_i\) be a matrix in \(\{0,1\}^{(q_1+i-1)\times n}\) whose rows \(y_1,\ldots ,y_{q_1+i-1}\) are such that the first \(q_1\) rows consist of the vectors \(w_1,\ldots ,w_{q_1}\) (i.e., \(y_1=w_1, \ldots ,y_{q_1}=w_{q_1}\)), and for each \(j=1,\ldots ,i-1\) the row \(y_{q_1+j}\) of \(Y_i\) corresponds to the last query asked by \(\mathcal {A}_1\) in run j (as will become clear soon, in any run i, if the last query asked by the adversary in that run triggers a rewind, then only that query is put in the matrix, and not any other previous query asked by the adversary in run i).

Furthermore, for any \(i\ge 1\) and any \(j\in [\ell ]\), let \(b_{i,j}\in \{0,1\}^{q_1+i-1}\) be the column vector such that \(b_{i,j}[k]=\textsf {IP}(y_k,x_j),k=1, \ldots ,q_1+i-1\). During the course of the simulation, the simulator will guarantee the following invariant: at the beginning of any run \(i\ge 1\), for any \(j\in [\ell ]\), \(Y_i\cdot x^0_j=b_{i,j}\).

In the first run the simulator runs the adversary on input a ciphertext that encrypts an index \(x^0=(x^0_1,\ldots ,x^0_\ell )\in \{0,1\}^{n\cdot \ell }\) such that for any \(j\in [\ell ]\), \(Y_1\cdot x^0_j=b_{1,j}\). The simulator can efficiently find such a vector by using the pre-image samplability of \(\textsf {IP}\) guaranteed by Theorem 9. When in a run \(i\ge 1\) the adversary makes a query for a vector \(y\in \{0,1\}^n\), we distinguish two mutually exclusive cases.

  1. 1.

    The vector y is a linear combination of the rows of \(Y_i\). Then, by the invariant property it follows that for any \(j\in [\ell ]\), \(\textsf {IP}(y,x_j)=\textsf {IP}(y,x^0_j)\), and the simulator continues the simulation answering the query as usual (i.e., by giving to the adversary \(\mathcal {A}_1\) the secret key for y generated honestly).

  2. 2.

The vector y is not a linear combination of the rows of \(Y_i\). Then, the simulator might not be able to answer this query. In this case, we say that the query triggered a rewind, and the simulator rewinds the adversary \(\mathcal {A}_1\) as follows. The simulator forms \(Y_{i+1}\) by adding the new row y to \(Y_i\) and uses the pre-image samplability of \(\textsf {IP}\) guaranteed by Theorem 9 to efficiently find a new vector \(x'=(x'_1,\ldots ,x'_\ell )\in \{0,1\}^{n\cdot \ell }\) such that for any \(j\in [\ell ]\), \(Y_{i+1}\cdot x'_j=b_{i+1,j}\) (i.e., the PS algorithm is invoked independently for each equation \(Y_{i+1}\cdot x'_j=b_{i+1,j}\)). Finally, the simulator rewinds the adversary by invoking it on input the encryption of \(x'\), and updates \(x^0\) by setting it to \(x'\). Notice that at the beginning of run \(i+1\) the invariant is still satisfied.

At the end of the last run, the simulator outputs what the adversary outputs. It is easy to see that the simulator executes at most n runs, since in any run \(i\ge 2\) the rank of \(Y_{i}\) is greater than the rank of \(Y_{i-1}\), and for any \(i\ge 1\) the rank of \(Y_i\) is at most n.
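Theorem 9 is used as a black box, but one standard way to realize the pre-image-sampling step for \(\textsf {IP}\) is Gaussian elimination over GF(2). The sketch below is ours (names and representation included): it finds some x with \(Yx = b \pmod 2\) when the system is consistent, and for simplicity returns one arbitrary solution rather than a uniformly random one.

```python
# Pre-image sampling for IP over Z_2 via Gaussian elimination on the
# augmented matrix [Y | b] over GF(2). Returns None if the system is
# inconsistent (which cannot happen in the proof, where the hidden index
# x is itself always a solution).

def solve_gf2(Y, b):
    rows, n = len(Y), len(Y[0])
    A = [Y[i][:] + [b[i]] for i in range(rows)]  # augmented matrix
    pivots = []                                  # (row, col) of each pivot
    r = 0
    for c in range(n):
        # Find a pivot in column c at or below row r.
        p = next((i for i in range(r, rows) if A[i][c]), None)
        if p is None:
            continue
        A[r], A[p] = A[p], A[r]
        for i in range(rows):                    # eliminate column c elsewhere
            if i != r and A[i][c]:
                A[i] = [a ^ e for a, e in zip(A[i], A[r])]
        pivots.append((r, c))
        r += 1
    # A zero row with a nonzero target means the system is inconsistent.
    if any(all(v == 0 for v in row[:n]) and row[n] for row in A):
        return None
    x = [0] * n                                  # free variables set to 0
    for row, col in pivots:
        x[col] = A[row][n]
    return x
```

Each rewind appends the new row y and re-solves; since y is linearly independent of the previous rows, the rank strictly grows, which is exactly the run bound above.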

Finally, notice that at the beginning of the last run the invariant guarantees that for any query y asked by \(\mathcal {A}_0\) and for any \(j\in [\ell ]\), \(\textsf {IP}(y,x_j)=\textsf {IP}(y,x^0_j)\). Furthermore, since in the last run no query triggered a rewind, any query asked by \(\mathcal {A}_1\) in the last run still satisfies this property. Therefore, by the IND-Security of the scheme, the output of the simulator is indistinguishable from that of the adversary in the real game.

Non-biased simulation We stress that this is a simplified simulation and the simulator also needs to guarantee that its output is not biased. This can be done as explained in the security reduction of Theorem 5. \(\square \)

RSIM-Security for \(\mathsf{NC}_0\) circuits. Recall that \(\mathsf{NC}_0\) is the class of all families of Boolean circuits of polynomial size and constant depth with AND, OR, and NOT gates of fan-in at most 2. It is a known fact that circuits in \(\mathsf{NC}_0\) with n-bit input and one-bit output can be expressed as multivariate polynomials \(p(x_1,\ldots ,x_n)\) over \({\mathbb {Z}}_2\) of constant degree.

Furthermore, one can encode such polynomials as vectors in \({\mathbb {Z}}_2^{n^m}\) for some constant m and evaluate them at any point using the inner-product predicate. Therefore, it is easy to see that the previous proof naturally implies the existence of an RSIM-Secure FE scheme for any family of circuits in \(\mathsf{NC}_0\); we omit the details.
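The encoding claim can be illustrated for degree 2; this is our simplified sketch (the function names are ours), with the general degree-m case using longer monomial vectors in the same way.

```python
# Encoding a low-degree polynomial over Z_2 as a coefficient vector so
# that its value at x equals the inner product of that vector with the
# monomial vector of x, taken mod 2. The paper's encoding into Z_2^{n^m}
# works analogously for any constant degree m.

from itertools import combinations_with_replacement

def monomials(x, m=2):
    """All monomial values of degree <= m at point x (constant term first)."""
    n = len(x)
    out = [1]                                   # degree-0 monomial
    for d in range(1, m + 1):
        for idx in combinations_with_replacement(range(n), d):
            v = 1
            for i in idx:
                v *= x[i]
            out.append(v)
    return out

def encode(coeffs, n, m=2):
    """Coefficient vector aligned with monomials(); coeffs maps tuples of
    variable indices (() for the constant term) to bits."""
    keys = [()]
    for d in range(1, m + 1):
        keys += list(combinations_with_replacement(range(n), d))
    return [coeffs.get(k, 0) for k in keys]

def eval_via_ip(c, x, m=2):
    return sum(ci * mi for ci, mi in zip(c, monomials(x, m))) % 2
```

For example, \(p(x_1,x_2) = x_1 x_2 + x_1 + 1\) over \({\mathbb {Z}}_2\) becomes the coefficient map `{(): 1, (0,): 1, (0, 1): 1}`, and its value at any point is recovered as an inner product mod 2.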

Theorem 11

If there exists a predicate encryption scheme for \(\textsf {IP}\) that is \((\mathsf{poly},\mathsf{poly}, \mathsf{poly})\)-IND-Secure, then there exists a predicate encryption scheme \(\mathsf {PE}\) for any family of circuits in \(\mathsf{NC}_0\) that is \((\mathsf{poly},\mathsf{poly}, \mathsf{poly})\)-RSIM-Secure.

Despite their weakness, \(\mathsf{NC}_0\) circuits can be employed for many practical applications (see [9]).

1.3 Appendix 3c: Equivalence for monotone conjunctive Boolean formulae

The functionality monotone conjunctive Boolean formulae (MCF) is defined in the following way. It is a family of predicates with key space \(K_n\) consisting of monotone (i.e., without negated variables) conjunctive Boolean formulae over n variables (i.e., a subset of indices in [n]) and index space \(I_n\) consisting of assignments to n Boolean variables (i.e., binary strings of length n), and for any \(\phi \in K_n,x\in I_n\) the predicate \(\mathsf{MCF}(\phi ,x)=1\) if and only if the assignment x satisfies the formula \(\phi \). If a formula \(\phi \subseteq [n]\) contains the index i, we say that \(\phi \) has the i-th formal variable set.
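The predicate can again be transcribed directly; representing a formula as a Python set of variable indices is our choice.

```python
# MCF predicate: a monotone conjunction phi is a set of variable indices,
# an index x is a bit string, and MCF(phi, x) = 1 iff every variable
# named in phi is set to 1 in x.

def mcf(phi, x):
    return 1 if all(x[i] == 1 for i in phi) else 0
```

Note that the empty formula is satisfied by every assignment, consistent with an empty conjunction being vacuously true.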

The reader may have noticed that PE for MCF is a special case of PE for the family of all conjunctive Boolean formulae introduced by [14]. Though the monotonicity weakens the power of the primitive, it still has interesting applications, like PE for subset queries, as shown by [14]. We point out that monotonicity is fundamental to implement our rewinding strategy. In fact, (under some complexity assumption) the functionality that computes the family of all conjunctive Boolean formulae is not PS,Footnote 11 so it is not clear whether an equivalence between \((\mathsf{poly},\mathsf{poly},\mathsf{poly})\)-IND-Security and \((\mathsf{poly},\mathsf{poly},\mathsf{poly})\)-RSIM-Security can be established for this primitive. On the other hand, by weakening the functionality to allow only monotone formulae, we are able to prove the following theorem.

Theorem 12

If a predicate encryption scheme \(\mathsf {PE}\) for \(\mathsf{MCF}\) is \((\mathsf{poly},\mathsf{poly},\mathsf{poly})\)-IND-Secure then \(\mathsf {PE}\) is \((\mathsf{poly},\mathsf{poly},\mathsf{poly})\)-RSIM-Secure as well.

Proof sketch

(Simplified simulation) The proof follows the lines of the previous equivalence theorems and is only sketched, outlining the differences. Let \(x=(x_1,\ldots ,x_\ell )\) be the challenge index (i.e., assignment) vector chosen by the adversary \(\mathcal {A}_0\), which the simulator does not know.

The simulator can easily sample an index vector \(x^0=(x_1^0,\ldots ,x^0_\ell )\) such that for any \(i\in [\ell ]\), \(x^0_i\) satisfies the equations: \(\mathsf{MCF}(\phi ,x^0_i)=\mathsf{MCF}(\phi ,x_i)\) for any query \(\phi \) asked by \(\mathcal {A}_0\) before seeing the challenge ciphertexts.

This can be done by the simulator in the following way, just given the evaluations of the formulae on the assignments. In full generality, fix an arbitrary set of formulae \(A=\{\phi _i\}_{i\in [q]}\) and their evaluations over some (hidden) assignment \(x=(x_1,\ldots ,x_\ell )\). For any \(j\in [\ell ]\) and any position \(k\in [n]\), the simulator sets the k-th bit of \(x^0_j\) to be 1 or 0 according to the following rules.

If there exists some \(\phi \in A\) that has the k-th formal variable set and \(x_j\) satisfies \(\phi \) (the simulator has this information because it knows the evaluation of \(\phi \) on \(x_j\)), then the k-th bit of \(x^0_j\) is set to 1; otherwise (i.e., if for every \(\phi \in A\) either the k-th formal variable of \(\phi \) is not set or \(x_j\) does not satisfy \(\phi \)), it is set to 0.

It is easy to see that \(x^0\) satisfies the previous equations with respect to the set of formulae A and thus is a valid pre-image of x. As usual, we divide the execution of the simulation into runs.
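The sampling rule just described can be sketched as follows (a minimal sketch of ours; the function name is a placeholder). The key observation making it correct is that a satisfied monotone conjunction forces all its variables to 1 in the hidden \(x_j\), so the rule never sets a bit that is 0 in \(x_j\); unsatisfied formulae therefore remain unsatisfied on the sampled pre-image.

```python
# Pre-image sampling for MCF: bit k of x0_j is set to 1 iff some formula
# in A that x_j satisfies mentions variable k.

def sample_preimage(A_with_evals, n):
    """A_with_evals: list of (phi, value) pairs, where phi is a set of
    variable indices in [0, n) and value = MCF(phi, x_j) for the hidden
    assignment x_j. Returns the sampled x0_j as a bit list."""
    x0 = [0] * n
    for phi, value in A_with_evals:
        if value == 1:          # x_j satisfies phi => all of phi's bits are 1
            for k in phi:
                x0[k] = 1
    return x0
```

Since bits only ever flip from 0 to 1 across runs, this also gives the run bound: the number of rewinds is at most the bit length of x.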

During the course of the simulation, the simulator will maintain the invariant that at the beginning of any run, the index vector \(x^0\) satisfies all the equations with respect to the (hidden) vector x and to all the queries asked by the adversary so far. If a new query violates these equations, then the simulator has to find a new pre-image that satisfies all the equations, including the new one.

This is done as before by pre-sampling according to the above rules. Notice that once a bit in some index \(x^0_j\) is set to 1, it is no longer changed. Thus, the number of runs is upper-bounded by the bit length of x. Therefore, if \(\mathsf {PE}\) is IND-Secure, the simulator can conclude the simulation and produce an output indistinguishable from that of the adversary, as desired.

Non-biased simulation We stress that this is a simplified simulation and the simulator also needs to guarantee that its output is not biased. This can be done as explained in the security reduction of Theorem 5. \(\square \)

1.4 Appendix 3d: Predicates with polynomial size key space

Boneh et al. [15] (see also [14]) presented a generic construction for functional encryption for any functionality F where the key space K has polynomial size that can be proven \((\mathsf{poly},\mathsf{poly},\mathsf{poly})\)-IND-Secure in the standard model and a modification that can be proven \((\mathsf{poly},\mathsf{poly},\mathsf{poly})\)-SIM-Secure in the random oracle model.

Bellare and O’Neill [10] proved the \((\mathsf{poly},\mathsf{poly},\mathsf{poly})\)-SIM-Security of their scheme assuming that the underlying PKE scheme is secure against key-revealing selective opening attacks (SOA-K) [11]. On the other hand, we prove that the construction is \((\mathsf{poly},\mathsf{poly},\mathsf{poly})\)-RSIM-Secure assuming only an IND-CPA-secure PKE, which is a weaker assumption than the SOA-K PKE needed in [10].

The construction of Boneh et al. is the following. Let \(s = |K| - 1\) and \(K = (k_0=\epsilon , k_1,\ldots , k_s)\).Footnote 12

The brute force functional encryption scheme realizing F uses a semantically secure public-key encryption scheme \(\mathcal {E}=(\mathsf{KeyGen}, \mathsf{Enc}, \mathsf{Dec})\) and works as follows:

  1. 1.

    \(\mathsf{Setup}(1^\lambda )\): for \(i = 1,\ldots ,s\), run \((\mathcal {E}.\mathsf{pk}_i,\mathcal {E}.\mathsf sk_i)\leftarrow \mathcal {E}. \mathsf{KeyGen}(1^\lambda )\) and output \(\mathsf{Mpk}=(\mathcal {E}.\mathsf{pk}_1,\ldots , \mathcal {E}.\mathsf{pk}_s)\) and \(\mathsf{Msk}=(\mathcal {E}.\mathsf sk_1,\ldots ,\mathcal {E}.\mathsf sk_s)\).

  2. 2.

    \(\mathsf{KeyGen}(\mathsf{Msk}, k_i)\): output \(\mathsf sk_i {:}{=} \mathcal {E}.\mathsf sk_i\).

  3. 3.

\(\mathsf{Enc}(\mathsf{Mpk},x)\): output \(\mathsf{Ct}{:}{=}(F(\epsilon ,x), \mathcal {E}.\mathsf{Enc}(\mathcal {E}.\mathsf{pk}_1,F(k_1,x)), \ldots , \mathcal {E}.\mathsf{Enc}(\mathcal {E}.\mathsf{pk}_s,F(k_s,x)))\).

  4. 4.

    \(\mathsf{Dec}(\mathsf sk_i,\mathsf{Ct})\): output \(\mathsf{Ct}[0]\) if \(\mathsf sk_i = \epsilon \), and output \(\mathcal {E}.\mathsf{Dec}(\mathcal {E}.\mathsf sk_i,\mathsf{Ct}[i])\) otherwise.

Theorem 13

Let \(\mathsf {FE}\) be the above \((\mathsf{poly},\mathsf{poly},\mathsf{poly})\)-IND-Secure functional encryption scheme for the functionality F. Then, \(\mathsf {FE}\) is \((\mathsf{poly},\mathsf{poly},\mathsf{poly})\)-RSIM-Secure as well.

Proof sketch

(Simplified simulation) The security reduction uses the same ideas as those in Sect. 5 and Appendix 3. Roughly, the strategy of the simulator is the following. Again, we divide the execution of the simulator into runs.

Let \((x_1,\ldots ,x_\ell )\) be the vector of challenge messages chosen by the adversary and unknown to the simulator. At the beginning of the first run, the simulator executes the adversary on input ciphertexts \((\mathsf{Ct}_1,\ldots ,\mathsf{Ct}_\ell )\) that encrypt dummy values.

Recall that for any \(i\in [\ell ]\), \(\mathsf{Ct}_i[j]\) is supposed to encrypt \(F(k_j,x_i)\). When the adversary issues a key-generation query \(k_j\), the simulator learns \((F(k_j,x_1),\ldots ,F(k_j,x_\ell ))\). Then, the simulator rewinds the adversary, executing it on input a new tuple of ciphertexts \((\mathsf{Ct}_1',\ldots ,\mathsf{Ct}_\ell ')\) where, for each \(i\in [\ell ]\), the slot \(\mathsf{Ct}'_i[j]\) encrypts \(F(k_j,x_i)\) for every key \(k_j\) queried so far (the remaining slots still encrypt dummy values).

After at most \(s+1\) runs, the simulated ciphertexts encrypt the same values as in the real game, and the simulator terminates, returning the output of the adversary. This concludes the proof.
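The run-counting argument can be sketched abstractly. The model below is ours, not the paper's: the adversary's query sequence is fixed in advance and `leak` stands for the ideal-functionality oracle revealing \((F(k_j,x_1),\ldots ,F(k_j,x_\ell ))\).

```python
# Sketch of the rewinding simulator for the brute-force scheme: each new
# key query k_j reveals one column of values, the simulator patches slot j
# of all ell ciphertexts and rewinds; with s distinct keys this gives at
# most s+1 runs.

def simulate_bruteforce(adversary_queries, leak):
    """adversary_queries: sequence of key indices j in [1..s] the adversary
    asks; leak(j) returns (F(k_j, x_1), ..., F(k_j, x_ell)).
    Returns the number of runs executed."""
    known = {}                      # j -> learnt column of values
    runs = 0
    while True:
        runs += 1
        rewound = False
        for j in adversary_queries:
            if j not in known:
                known[j] = leak(j)  # patch slot j in all ell ciphertexts
                rewound = True
                break               # rewind with the updated ciphertexts
        if not rewound:
            return runs
```

Each run either learns a fresh column (at most s times) or is the final run in which all queries are already known, hence the \(s+1\) bound.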

Non-biased simulation We stress that this is a simplified simulation and the simulator also needs to guarantee that the output is not biased. This can be made as explained in the security reduction of Theorem 5.

FE with multi-bit output. Notice that a predicate encryption scheme for a predicate P implies a predicate encryption scheme for the same predicate where the payload is fixed to 1 (meaning that the predicate is satisfied). This in turn implies a functional encryption scheme for the functionality P (where the evaluation algorithm of the FE scheme runs the evaluation algorithm of the PE scheme and outputs 0 if the PE scheme returns \(\bot \) and 1 otherwise).

Finally, the latter implies a functional encryption scheme for the class of circuits with multi-bit output that extends P in the obvious way. These implications preserve \((\mathsf{poly},\mathsf{poly},\mathsf{poly})\)-RSIM-Security. \(\square \)


Cite this article

De Caro, A., Iovino, V. On the power of rewinding simulators in functional encryption. Des. Codes Cryptogr. 84, 373–399 (2017). https://doi.org/10.1007/s10623-016-0272-x
